
Tech & IT

Telegram AI Bots and the Rise of Deepfake Abuse

With the rise of artificial intelligence, anyone with internet access can now create AI-generated pornography. On the messaging app Telegram, millions of users are exploiting AI-powered “bots” to generate nonconsensual, nude images of others.


Telegram, launched in 2013, is a messaging and social media platform known for its speed and security, boasting over 500 million active users. The app offers private messaging, video calls, and public groups where large communities can interact. However, according to a WIRED investigation, more than 50 AI-powered bots on Telegram have been used for nonconsensual image abuse.

Using AI to manipulate images for explicit content is not new, but it has become far more accessible. In 2020, deepfake researcher Henry Ajder discovered a Telegram bot designed to "undress" people in images. These bots vary in capability: some remove clothing from photos, while others generate explicit deepfake content. More than 4 million users engage with these bots monthly, and some individual bots have amassed over 400,000 active users each.

This alarming number of participants highlights the growing normalization of deepfake exploitation. Women and children are frequently targeted, making this a significant global issue. These bots, which function as mini-apps within Telegram, are promoted and supported by Telegram channels. These channels notify users of new bot features, provide purchasing options for tokens required to use the bots, and share links to new bots if others get removed.

Although some of these bots have been taken down, nonconsensual intimate image (NCII) abuse remains a pressing problem. AI-powered deepfake tools openly market their ability to strip clothing from images; one bot even advertised: “I can do anything you want with the face or clothes in the photo you provide.”

Being a victim of NCII or deepfake abuse can be devastating. The companies behind these technologies must take responsibility. Elena Michael, co-founder of #NotYourPorn, told WIRED that Telegram has been “notoriously difficult” to engage with on safety concerns. While the company has made some progress in moderation, Michael believes Telegram must take a more proactive approach to prevent the spread of abusive content.

Social media platforms and AI developers have a duty to establish safeguards against deepfake exploitation. While AI has many positive applications, it is essential to counter its harmful uses.

Education Is Key to Fighting Deepfake Abuse

One of the most effective ways to combat AI-generated deepfakes and NCII is education. Understanding the risks and impact of this issue allows individuals to take action and raise awareness. Important conversations can inspire change and encourage more stringent policies.

Numerous resources exist to educate people about deepfake abuse and its consequences:

  • Fight The New Drug’s three-part documentary Brain Heart World explores the negative effects of pornography.
  • A collection of Fight The New Drug blog articles provides factual insights into the industry.
  • The Consider Before Consuming podcast discusses the science and research behind pornography’s impact.
  • Fight The New Drug’s YouTube channel offers personal testimonies and educational videos.

AI-generated pornography may seem victimless because no real act was filmed, but if an individual’s likeness is used without consent, it is still abuse. Deepfake technology can make it nearly impossible to distinguish real images from fabricated ones, and because many people believe what they see, victims can suffer severe emotional trauma.

Take the time to educate yourself and advocate against exploitative AI technology. Together, we can create a safer digital world.


Apple rolls out UK age verification with iOS 26.4 after Meta and Google child safety fines

Apple has introduced age verification for iPhone and iPad users in the UK with iOS 26.4 and iPadOS 26.4, adding a new layer of checks for accounts that require confirmation that the user is 18 or older.


According to reports, UK users may now be asked to verify their age by adding a credit card or scanning an ID, unless Apple has already confirmed that information. Apple says the process is required by law in some countries and regions for actions tied to an Apple Account, including downloading apps, changing certain settings, or accessing specific features. When verification is needed, a prompt appears in the Settings menu.

The rollout comes at a time when child safety rules are tightening across the UK. While current UK law does not specifically require device-level age verification, adult websites, including pornography platforms, are already expected to carry out age checks. That has led to wider discussion about whether verification should also happen at the device level, rather than only on individual sites.

The timing is especially notable because it follows a major child safety case involving Meta and Google. The companies were reportedly ordered to pay $6 million after a lawsuit in Los Angeles claimed that platforms including Facebook, WhatsApp, and YouTube had a serious impact on a young woman’s mental health.

Apple’s move may also reflect broader regulatory pressure. The UK government is reportedly considering stronger restrictions for under-16s on social media, similar to measures seen in Australia. Reports also indicate Apple has been working with Ofcom as these safety tools develop.

For users who cannot verify an adult identity, Apple suggests that some features may be limited or that the account may need to be placed under Family Sharing with a parent or guardian. The exact restrictions could vary depending on the situation.



Australia Age Checks Now Required for Porn Access

Australia has begun enforcing stricter age-verification rules for online adult content, requiring platforms to take meaningful steps to stop under-18s from accessing pornography and other age-restricted material. The Age-Restricted Material Codes, which cover social media, relevant electronic services, equipment providers, and designated internet services, came into effect on March 9, 2026.

Under the new framework, some services may now require proof of age before allowing access to legal adult content. Australia’s eSafety Commissioner says the accepted methods can vary by platform, but any age-assurance process must be accurate, reliable, and compliant with Australian privacy law. eSafety has said the changes are intended to reduce children’s exposure to pornography, high-impact violence, and other harmful age-inappropriate material online.

The rollout has already affected access to some major adult platforms in Australia, while debate continues over privacy risks and how effective the rules will be in practice. Recent reporting has also linked the changes to rising interest in VPN services as some users look for ways around the restrictions.



Apple: Age-Verification Tools Expand Worldwide With New 18+ Download Blocks

Apple is expanding its age-verification system to more countries to match stricter child-protection laws. The changes mainly affect how people download 18+ (adult-rated) apps and how developers confirm whether a user is a minor or an adult, without collecting sensitive personal details.


What’s changing for users

  • New 18+ download blocks: In Brazil, Australia, and Singapore, users must confirm they are 18 or older before downloading apps rated 18+.
  • Less access for minors to adult content: This is meant to stop children from downloading adult-only apps through the App Store.

What’s changing for developers

  • Declared Age Range API (updated): Apple is updating an API that lets apps know only an age category (example: minor vs adult), not the person’s exact age.
    • Developers do not receive private data, such as date of birth.
    • The app receives a simple “category signal” to follow local rules.
  • Parental control options: For child accounts, parents/guardians can choose whether to share age information and whether permission is required in certain situations.

Loot boxes and “gambling-like” features

Apple is also targeting apps with features regulators often consider risky for minors, such as loot boxes.

  • In Brazil, if an app includes loot boxes, Apple may automatically rate it 18+.
  • That means minors can’t download it, because the App Store will treat it as adult-only.

U.S. states: Utah and Louisiana

Apple is adding tools to help apps comply with state-level child safety laws:

  • In Utah and Louisiana, Apple can share a new user’s age category with developers.
  • The system can also flag when parental permission is required, including for major app updates.

Why Apple says it’s doing this

Apple frames the change as protecting kids while respecting privacy.

  • The App Store handles most of the verification.
  • Apps get only a yes/no type age signal (minor/adult), not personal identity details.
  • The goal is to comply with various laws without forcing developers to collect sensitive data.

