Senator Michael Bennet of Colorado, a member of the Senate Intelligence Committee, has called on Apple and Alphabet to remove the popular social media platform TikTok from their app stores over national security concerns. Bennet argues that the Chinese-owned company poses a threat to the US because of its data harvesting practices, since Chinese intelligence services can compel access to that data under Chinese law.
Bennet’s letter to the tech giants echoes sentiments expressed by an FCC commissioner in June last year. He argues that TikTok’s reach and popularity allow it to collect sensitive data from American users, including device information, search and viewing histories, IP addresses, faceprints, and voiceprints.
The US government has already taken action against TikTok’s ties to China. In December, President Joe Biden signed a bill prohibiting nearly four million government employees from using TikTok on government-owned devices. At least 27 state governments have passed similar measures.
In response to Bennet’s letter, TikTok called the claims “misleading.” The company said it had implemented Project Texas, an investment plan intended to provide additional assurances to its community about its data security and the platform’s integrity.
TikTok has over 100 million active users, 36 percent of them American, who spend over 80 minutes daily on the app — more than Facebook and Instagram combined. There is no evidence that the Chinese government has demanded user data from TikTok or its parent company or influenced the content users see on the platform. However, in November, TikTok confirmed that China-based employees could gain remote access to European user data, and a BuzzFeed report revealed that company employees in China had access to US user data.
The Committee on Foreign Investment in the United States (CFIUS) is reviewing ByteDance’s 2017 merger of TikTok and Musical.ly. It may force ByteDance to sell TikTok to a US company, similar to the executive order issued by former President Donald Trump in 2020 — though that order was blocked by a federal court.
In 2021, TikTok agreed to pay $92 million to settle lawsuits alleging that the app clandestinely transferred vast quantities of user data on children to servers in China. To curb criticism of its data-sharing practices, TikTok has announced a partnership with Oracle to move its data on US users stored on foreign servers to Texas.
Anupam Chander, a professor of law and technology at Georgetown University, warned that banning TikTok in the US may encourage other countries to do the same to apps and services from the US and that it is unclear if anything short of a sale will satisfy TikTok’s critics. TikTok’s chief executive Shou Zi Chew will appear before a House committee in March.
Utah Passes Groundbreaking App Store Age Verification Law
Utah is the first U.S. state to require app stores to verify users’ ages and obtain parental consent before minors can download apps. The App Store Accountability Act shifts responsibility from websites to app stores, gaining support from Meta, Snap, and X. However, critics argue the law raises privacy concerns and could face legal challenges over free speech rights.
Utah has passed the App Store Accountability Act, making it the first U.S. state to require app stores to verify users’ ages and obtain parental consent for minors downloading apps. The law aims to enhance online safety for children, though similar regulations have faced legal opposition.
The law shifts the responsibility of verification from websites to app store operators like Apple and Google. Meta, Snap, and X support the move, stating that parents want a centralized way to monitor their children’s app activity. They have also urged Congress to adopt a federal approach to avoid inconsistencies across states.
Despite this support, privacy advocates and digital rights groups argue that requiring age verification could compromise user privacy and limit access to online content. The Chamber of Progress warns that this could infringe on free speech and constitutional rights.
Legal challenges are likely. A federal judge previously blocked a similar law in Utah, citing First Amendment violations. Opponents expect lawsuits that could delay or overturn the legislation.
As states push for stricter digital protections for minors, Utah’s law could serve as a test case for future regulations—if it survives expected legal battles.
Alibaba’s latest AI video generation model, Wan 2.1, was meant to be a breakthrough in open-source technology. However, within a day of its release, it was adopted by AI porn creators, sparking concerns over its potential for misuse. While open AI models democratize access to powerful tools, they also raise ethical issues, particularly in the creation of non-consensual content. The rapid adoption of Wan 2.1 highlights this ongoing challenge.
Alibaba, the Chinese tech giant, recently released its new AI video generation model, Wan 2.1, making it freely accessible to those with the necessary hardware and expertise. While this open-source approach empowers developers and researchers, it also comes with a dark side. Within just 24 hours, the AI porn community seized the opportunity to produce and share dozens of explicit videos using the new software.
Even more concerning is the reaction from a niche online community dedicated to creating nonconsensual AI-generated intimate media of real people. Users on Telegram and similar platforms quickly celebrated Wan 2.1’s capabilities, praising its ability to handle complex movements and enhance the quality of AI-generated adult content. One user, referring to Tencent’s Hunyuan AI model (another tool popular in these circles), noted, “Hunyuan was released just in December, and now we have an even better text-to-video model.”
This is the ongoing dilemma of open AI models. On one hand, they offer groundbreaking possibilities, allowing developers to experiment, innovate, and improve AI technology. On the other, they can be easily exploited to create unethical and harmful content, including deepfake pornography.
Rapid Adoption in AI Porn Communities
The speed at which Wan 2.1 was adapted for explicit content was staggering. The first modifications of the model appeared almost immediately on Civitai, a site known for hosting AI-generated models. By the time initial reports surfaced, multiple variations of Wan 2.1 had already been downloaded hundreds of times. Users on Civitai enthusiastically shared AI-generated pornographic videos, many of which were created using these modified models.
Civitai’s policies prohibit the sharing of nonconsensual AI-generated pornography, but loopholes remain. While the site does not host nonconsensual content directly, it allows users to download models that can be used elsewhere for illicit purposes. Previous investigations have shown that once these models are accessible, there is little stopping users from misusing them in private or unregulated online spaces.
The Bigger Issue: Ethics of Open AI Models
The release of open-source AI models like Wan 2.1 is a double-edged sword. Open models promote innovation, allowing developers to refine AI technology for legitimate purposes such as filmmaking, animation, and content creation. However, as seen with Wan 2.1, early adopters often push the boundaries of ethical use, leading to misuse in inappropriate or even illegal ways.
Despite mounting concerns, Alibaba has remained silent on the issue. The company has yet to respond to inquiries regarding the misuse of its AI model. This raises questions about the responsibilities of tech giants when it comes to the unintended consequences of their AI releases. Should companies impose stricter regulations on how their AI models are used? Or is it the responsibility of platforms and communities to enforce ethical guidelines?
What Comes Next?
As AI-generated content becomes increasingly sophisticated, the challenge of regulating its use grows more complex. Open-source AI models are powerful tools, but they must be released with safeguards in place to prevent misuse. Without proper oversight, the line between innovation and exploitation will continue to blur, leaving room for ethical dilemmas and legal concerns.
For now, Wan 2.1 stands as yet another example of how quickly AI technology can be both a breakthrough and a battleground. The question remains—how will companies like Alibaba address these issues moving forward?
SexLikeReal (SLR) has launched SLR For Women, its first dedicated VR porn vertical offering a female-first perspective. This initiative utilizes the platform’s chroma suit passthrough technology to create immersive experiences tailored for female viewers.
A New Approach to VR Adult Content
SLR For Women debuted with a VR porn scene featuring Danny Steele and Alicia Williams, filmed using chroma passthrough technology. The female performer wears a chroma suit that renders her body invisible in passthrough, preserving the viewer’s first-person perspective.
While female-perspective VR porn exists across various platforms, SLR’s entry is notable due to its technological advancements and strong user engagement. The company is inviting female users to submit scripts, with the best ideas set to be produced as POV VR scenes by its top production team.
Future Expansion & User Involvement
Currently, the SLR For Women section features just one scene, posted over three weeks ago. While a surge of female subscribers isn’t expected right away, SLR has indicated plans for more female-focused content and is encouraging user feedback to shape future releases.
SLR has previously introduced AI-powered passthrough technology, allowing non-chroma-shot videos to be converted into passthrough VR, as well as the world’s first AR cam rooms for live streaming. Whether this new venture will receive continued investment remains to be seen, but the launch signals an industry shift towards more inclusive VR experiences.