Law enforcement agencies worldwide, including the FBI and Interpol, are expressing concern over parent company Meta’s plans to roll out expanded end-to-end encryption, warning it could effectively “blindfold” the firm from detecting incidents of child sex abuse. The Virtual Global Taskforce, a coalition of 15 law enforcement organizations tasked with protecting children from such crimes, singled out Meta in a joint statement urging tech companies to consider secure protocols when instituting end-to-end encryption.
“The announced implementation of [end-to-end encryption] on Meta platforms Instagram and Facebook is an example of a purposeful design choice that degrades safety systems and weakens the ability to safeguard child users,” the Virtual Global Taskforce said in a policy statement. The officials argue end-to-end encryption, while a sought-after privacy feature for secure communications, could make it more difficult for companies like Meta to identify criminal behavior occurring on their platforms.
“The VGT calls for all industry partners to fully understand the impact of implementing system design decisions that result in blinding themselves to [child sex abuse] occurring on their platforms, or reduce their capacity to detect CSA and keep kids secure,” the agencies added.
“The abuse will not stop just because companies decide to cease looking,” the agencies added. Meta has indicated plans to roll out end-to-end encryption for messages on all of its platforms – with one company executive once stating the feature would be enabled by default “sometime in 2023.” Meta-controlled WhatsApp already offers the feature by default. Earlier this year, Meta published a blog post detailing expanded end-to-end encryption on its Messenger platform.
The Financial Times was first to report on the Virtual Global Taskforce’s statement. The outlet noted that UK lawmakers are currently working on an online safety bill that has drawn criticism from tech giants who allege it will hurt user privacy. The proposed legislation would empower the UK’s telecom regulator, the Office of Communications, to require companies to monitor some messages for instances of child abuse.
An open letter signed by various tech bosses, including WhatsApp chief Will Cathcart, argued the bill would “give an unelected official the power to weaken the privacy of billions of people around the world” by scrutinizing encrypted messages. Meta defended its safety practices in a statement obtained by the FT.
“The vast majority of Brits already rely on apps that use encryption. We don’t think people want us reading their private messages, so we have developed safety measures that prevent, detect and allow us to take action against this appalling abuse, while preserving online privacy and security,” a Meta spokesperson said in the statement.
“As we continue to roll out our end-to-end encryption plans, we remain dedicated to working with law enforcement and child safety experts to ensure that our platforms are safe for young people.” The Post has reached out to Meta for further comment. Meta has faced intense criticism from US legislators over its safety practices, with detractors arguing the tech giant hasn’t gone far enough to protect its underage users from harmful content and abuse.
As The Post reported earlier this month, online safety experts penned an open letter urging Meta CEO Mark Zuckerberg to abandon the company’s plans to let children and teen users access its new metaverse service “Horizon Worlds” due to concern about potential abuse. The Virtual Global Taskforce is an alliance of 15 law enforcement agencies from around the world, including the FBI, US Immigration and Customs Enforcement (ICE), Interpol, Europol, and the United Kingdom’s National Crime Agency, with the latter serving as the group’s chair. The task force’s website describes the group as “an international coalition of 15 committed law enforcement agencies collaborating to address the global threat from child sexual abuse.”
Second Life: the First Bot Connected to ChatGPT Designed for Virtual Sex
Humanity Reaches a New Frontier: ChatGPT-Powered Avatar Sex Bots. We now live in a world where technology reaches far and wide, opening up unimaginable possibilities. From driverless cars to artificial intelligence (AI), the potential for innovation and advancement within the tech world appears infinite. One emerging trend pairs AI avatar sex bots with ChatGPT for an unprecedented level of virtual sex simulation.
The brainchild of Stone Johnson, a scientist in applied physics, these bots combine ChatGPT with animesh and built-in senses to integrate fully into Second Life. Powered by GPT-3.5 Turbo, they possess a broad set of animations and can easily recognize objects, people, and actions within their environment. They give feedback on touch, proprioception, “vision,” and more, and can even move around and respond to their environment.
These slinky, body-hugging avatars have been created to tap into the inherent desire most people have for virtual simulation. However, Johnson insists he has higher, more informative purposes in mind. He believes that Second Life will be a vital means of exploring the capabilities of AI, allowing the new technology to be evaluated in a low-risk, virtual environment where people can study the behavior and responses of the avatar sex bots.
In terms of practical applications, Johnson’s bots can be used for more than just pixel sex. Some of his clients have already used them for companionship, and he envisions the successful deployment of his bots in retail stores. With their AI apparatus, they could serve people in various departments with an impressive level of autonomy, greatly improving customer service and potentially leading to entirely AI-operated malls or stores in the future.
However, these bots must still confront the ethical considerations of AI use, including privacy and security issues. While Johnson admits the bots are not sentient, they can provide a level of comfort and interaction beyond what many social media conversations offer, making them more than a mere sex toy and instead part of users’ daily lives and routines.
At present, purchasing one of these bots requires an OpenAI account, with each question-and-answer session costing the user less than half a penny. As the technology gains traction, the implications of both commercial and sexual virtual experiences will be harder to ignore. Will AI and the concept of virtual intimacy join the wider movement toward acceptance, or does society still have reservations?
Only time will tell the answer. However, one thing is certain: the introduction of GPT-powered avatar sex bots into the world of virtual scenarios is a noteworthy moment in the development of new technology. The brave new world of virtual sexuality awaits us, ready to be explored!
Tinder: Action Against OnlyFans and Sugar Daddy Ads on Its Platform
Tinder is taking a tough stance on users who try to promote businesses or advertise sex work on the dating app. The company has announced that it will remove any social media handles listed in public user profile bios to combat this behavior. “Tinder is not a place to promote businesses to try making money. Members shouldn’t advertise, promote, or share social handles or links to gain followers, sell things, fundraise, or campaign,” Tinder announced on Thursday. This policy refresh is designed to keep the app a “fun and safe place” for meeting people where “realness” is key, according to Tinder.
“To guide these younger daters as they start their dating journey, Tinder is using this policy refresh to remind and educate members about healthy dating habits — both online and in real life,” said Tinder’s senior vice president of member strategy Ehren Schlue. “Tinder isn’t the place for any sort of sex work, escort services, or compensated relationships. So, no — don’t use Tinder to find your sugarmamma.”
The dating app’s updated community guidelines go further, with a new paragraph explicitly stating that Tinder is not the place for any sort of sex work, escort services, or compensated relationships. This is part of the company’s effort to distance itself from the hookup app label it has been given in the past and redefine its cultural significance away from hookups and towards healthy relationships.
Minnesota Takes the Lead: Deepfake Regulation Set to Materialize by May 22nd
The Minnesota State Senate voted to pass a bill intended to address the misuse of artificial intelligence (AI) generated video, images, and sound known as “deepfakes.” The often convincing yet fabricated media created with these technologies have caused alarm among ethicists, political observers, and others due to their potential for misuse in election manipulation and the nonconsensual distribution of sexual images.
If signed into law, the bill would criminalize the creation and distribution of false digital media without the consent of the person or persons depicted, including digital pornography. In addition, the bill provides for a felony penalty when the offender knowingly posts a pornographic deepfake to a website, disseminates it for profit, uses it to harass a person, or repeats the offense within a five-year period.
Widespread bipartisan support suggests that the bill could become law by May 22nd. Other states, such as California and Texas, have already enacted similar legislation as deepfake technology has become exponentially more accessible in recent years.
Sen. Erin Maye Quade, DFL-Apple Valley, a cosponsor of the bill, said, “Deepfake technology has the power to damage reputations, ruin lives, and even threaten the integrity of our democracy.” House bill sponsor Rep. Zack Stephenson, DFL-Coon Rapids, added that, “Minnesota already has a statute prohibiting revenge porn or the nonconsensual distribution of private sexual images, but this would not apply to pornographic deepfakes.”
The bill would also allow victims of sexual deepfakes to sue the creators for damages and have the images removed from the internet, and would criminalize the distribution of videos altered within 60 days of an election with the intent to injure a candidate or influence the outcome of the election.
Sen. Nathan Wesenberg, R-Little Falls, was the only Senator to vote against the bill, stating that he wanted to see higher civil fines included for deepfake offenses. With that said, it appears that this bill is soon to become law in Minnesota, setting an example for others to follow in order to combat the rising danger of deepfakes.