WhatsApp to Implement Age Verification in the US - BCAMS MAGAZINE

Tech & IT

WhatsApp to Implement Age Verification in the US

Revealing one’s birth year can be a sensitive matter, and many prefer to keep such personal details private. However, WhatsApp will soon require US users to provide their birth year to comply with new age verification laws.


Meta, the parent company of WhatsApp, has not officially confirmed this change, but multiple reports indicate that certain US states have passed laws mandating age verification. These laws aim to restrict minors’ access to explicit material and ensure that children cannot access such content without parental consent.

The new regulations are arriving faster than expected, primarily in Republican-led states, and similar bills are under development in several others.

To comply, WhatsApp plans to integrate a feature in its newest beta version that requires users to input their birth year. According to WABetaInfo, this will become a mandatory part of the setup process. The app will also warn users that they cannot change this information later.

Although the exact timeline for this change is unclear, leaked information suggests that Meta will not announce a specific date in advance. The update is needed for WhatsApp to comply with state age verification laws.

Experts believe that only residents of states with these laws or those visiting such states will need to comply immediately. This situation is similar to how certain sites like Pornhub have responded to age verification requirements.

Currently, the states enforcing these laws include Alabama, Idaho, Nebraska, South Carolina, Florida, Oklahoma, South Dakota, Kansas, Tennessee, Indiana, and Georgia.


Tech & IT

Mastercard Pilots Crypto Credential Network

Mastercard has started testing its Crypto Credential network, aiming to simplify and secure cross-border digital asset transactions between Latin America and Europe.


Launched last year, Mastercard Crypto Credential ensures verified interactions among consumers and businesses using blockchain networks.

During these pilots, users from various countries on the Bit2Me, Lirium, and Mercado Bitcoin exchanges can send both cross-border and domestic transfers across multiple currencies and blockchains.

Instead of using the typically long and complex blockchain addresses, users can now send and receive crypto using their Mastercard Crypto Credential aliases.

For payments, exchanges will first verify users according to Mastercard Crypto Credential standards. Once verified, users receive an alias to facilitate sending and receiving funds across all supported exchanges.

When a user initiates a transfer, Mastercard Crypto Credential checks that the recipient’s alias is valid and that their wallet supports the digital asset and associated blockchain. If the receiving wallet doesn’t support the asset or blockchain, the sender is notified, and the transaction is halted to prevent potential loss of funds.
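The pre-transfer check described above can be sketched in code. This is a hypothetical illustration only: the names (`Wallet`, `can_transfer`, the alias directory) are assumptions for the sketch and do not reflect Mastercard's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the alias-validation flow described above.
# All names here are illustrative assumptions, not Mastercard's API.

@dataclass
class Wallet:
    supported_assets: set   # e.g. {"BTC", "USDC"}
    supported_chains: set   # e.g. {"bitcoin", "polygon"}

# Toy alias directory standing in for the Crypto Credential lookup.
ALIAS_DIRECTORY = {}

def can_transfer(recipient_alias, asset, chain):
    """Return (ok, reason): the transfer is halted if the alias is
    unknown or the recipient wallet can't receive this asset/chain."""
    wallet = ALIAS_DIRECTORY.get(recipient_alias)
    if wallet is None:
        return False, "unknown alias"
    if asset not in wallet.supported_assets:
        return False, "wallet does not support asset " + asset
    if chain not in wallet.supported_chains:
        return False, "wallet does not support chain " + chain
    return True, "ok"

# A verified user receives an alias usable across supported exchanges.
ALIAS_DIRECTORY["maria.mcc"] = Wallet({"BTC", "USDC"}, {"bitcoin", "polygon"})

print(can_transfer("maria.mcc", "USDC", "polygon"))   # proceeds
print(can_transfer("maria.mcc", "ETH", "ethereum"))   # halted before funds move
```

The key design point the article describes is that the check happens before the transfer is initiated, so an unsupported asset or chain produces a notification to the sender rather than a loss of funds.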

Mastercard believes this system could greatly benefit the remittance market and plans to extend support to NFTs, ticketing, and other payment options.

Walter Pimenta, EVP of Product and Engineering for Latin America and the Caribbean at Mastercard, states, “As interest in blockchain and digital assets continues to surge in Latin America and around the world, it is essential to keep delivering trusted and verifiable interactions across public blockchain networks.”

Source: finextra.com


Tech & IT

Apple Starts Removing Apps That Create AI-Generated Adult Content

Apple has taken action against three apps on its App Store after they were found to be advertising capabilities to create AI-generated pornographic images. These apps, which were misleadingly described as “art generators,” actually offered features that could simulate nude images of individuals without their consent. This function has the potential to be misused for harassment or blackmail.


The issue was highlighted by 404 Media, which reported the apps to Apple after finding them advertising on Instagram and adult websites with promises to “Undress any girl for free.” Although these capabilities were hidden within the apps themselves, some also offered face-swap technology for adult content. Apple initially struggled to locate the problematic apps until 404 Media provided specific links and advertisements.

These apps had been available on the App Store since as early as 2022, appearing innocuous enough to escape scrutiny from both Apple and Google when listed on their respective platforms. Even after the discovery, removal was delayed: the apps were allowed to remain available as long as they ceased the inappropriate advertising. Google eventually removed one from the Play Store earlier this year for failing to comply with those conditions.

This incident comes at a sensitive time for Apple as it prepares to announce significant AI enhancements to iOS 18 and Siri at the upcoming Worldwide Developers Conference (WWDC) in June. The company is making efforts to maintain a clean corporate image, which includes licensing content legally for AI training amidst growing concerns over copyright issues in the industry. This situation poses a challenge to Apple’s efforts to keep its reputation untarnished in the evolving AI landscape.

Source: phonearena.com, 404 Media


Tech & IT

Alabama Expands Laws Against AI-Generated Abuse

Alabama has strengthened its child pornography laws by including AI-generated deepfake images, a response to a distressing cyberbullying incident involving deepfakes at a local middle school. Governor Kay Ivey emphasized the importance of adapting legislation to safeguard children against emerging digital threats.


In a significant legislative move, Governor Kay Ivey of Alabama signed a new bill on Tuesday that broadens the state’s child pornography laws to prohibit the creation of deepfake images using artificial intelligence. This bill, known as HB168, was prompted by a cyberbullying case at Demopolis Middle School where deepfake technology was maliciously used.

During a speech, Governor Ivey expressed her commitment to ensuring that technological advancements do not compromise public safety, particularly the safety of young individuals. “As we navigate this rapidly changing world, safeguarding our children becomes paramount, and this legislation is a step in that direction,” she remarked.

The Alabama Child Protection Act of 2024 was crafted by State Senator April Weaver and Representative Matt Woods in response to an incident where images of six female students were digitally manipulated to appear in inappropriate contexts, causing considerable distress. This incident came to light at a school board meeting in December and has since been under investigation by local law enforcement and federal agents.

Alabama Attorney General Steve Marshall lauded the new law as a critical measure in combating child exploitation. “Alabama is setting a precedent with this strict stance against AI-generated child exploitation material. Our state demonstrates unequivocal intolerance towards such abuses,” Marshall stated. He also noted that the real challenge lies ahead in effectively implementing this law to deter future violations and support ongoing enforcement efforts.
