Latest News
Protecting EU Citizens from Nonconsensual Pornographic Deepfakes: The Law in 2023
The European Union’s current and proposed laws fail to adequately protect citizens from the harms of nonconsensual pornographic deepfakes—AI-generated images, audio, or videos that use an individual’s likeness to create pornographic material without their consent. To protect victims of this abuse, the EU must take steps to amend existing legislative proposals and encourage soft law approaches.
Although deepfakes have legitimate commercial uses, 96 percent of deepfake videos found online are nonconsensual pornography. Perpetrators use them to harass, extort, offend, defame, or embarrass individuals by superimposing their likeness onto sexual material without permission. The increasing availability of AI tools has made creating and distributing deepfakes, and with them this form of abuse, easier than ever.
The Digital Services Act (DSA) obliges platforms to establish procedures through which illegal content can be reported and taken down. However, this will do little to stem the spread of nonconsensual pornographic deepfakes, since the law does not classify them as illegal. The DSA also does not reach the 94 percent of deepfake pornography hosted on dedicated pornographic websites rather than mainstream platforms. Moreover, the EU dropped a proposal in the DSA that would have required porn sites hosting user-generated content to swiftly remove material flagged by victims as depicting them without permission.
The Artificial Intelligence (AI) Act, likely to pass into law in 2023, requires creators to disclose that deepfake content is artificially generated. But this does little to protect victims, as the demand for deepfakes does not depend on their authenticity. The Directive on Gender-Based Violence, proposed in 2022, criminalizes sharing intimate images without consent and could include deepfakes in its scope. However, it fails to cover nudity that is not explicitly sexual or sexual imagery that is not wholly nude. Moreover, it applies only to material made accessible to many end users, even though sharing a deepfake with a single person can cause great harm.
These legislative proposals must be amended to better protect victims and deter perpetrators. Additionally, the EU should encourage soft-law approaches such as public awareness campaigns, self-regulatory codes of practice, and the development of deepfake detection tools by law enforcement. With a combination of hard- and soft-law approaches, the EU can protect its citizens from the harms of nonconsensual pornographic deepfakes.
Latest News
Judge Blocks Utah Social Media Age Verification Law
A federal judge in Utah has temporarily blocked a state law intended to protect children’s privacy and limit their social media use, finding it likely unconstitutional.
U.S. District Court Judge Robert Shelby issued a preliminary injunction against the law, which would have required social media companies to verify users’ ages, enforce privacy settings, and limit certain features on minors’ accounts.
The law was scheduled to take effect on October 1, but its enforcement is now paused pending the outcome of a case filed by NetChoice, a nonprofit trade group representing companies such as Google, Meta (the parent company of Facebook and Instagram), Snap, and X. The Utah legislature passed the Utah Minor Protection in Social Media Act in 2024 after earlier legislation from 2023 faced legal challenges. State officials believed the new law would withstand legal scrutiny, but Judge Shelby disagreed.
“The court understands the State’s desire to protect young people from the unique risks of social media,” Shelby wrote. However, he added that the state had not shown a compelling justification for infringing on the First Amendment rights of social media companies.
Republican Governor Spencer Cox expressed disappointment with the court’s ruling but emphasized that the fight was necessary due to the harm social media causes to children. “Let’s be clear: social media companies could, right now, voluntarily adopt all of the protections this law imposes to safeguard our children. But they refuse, choosing profits over our kids’ well-being. This has to stop, and Utah will continue to lead this battle.”
NetChoice contends that the law would force Utah residents to hand over more personal information for age verification, increasing the risk of data breaches. In 2023, Utah became the first state to regulate children’s social media use, and the state has since sued TikTok and Meta, accusing them of using addictive features to lure children.
Under the 2024 law, minors’ accounts would have default settings that limit direct messages and sharing features and disable autoplay and push notifications, which lawmakers say contribute to excessive use. The law would also restrict how much information social media companies could collect from minors.
Additionally, another law taking effect on October 1 allows parents to sue social media companies if their child’s mental health worsens due to excessive use of algorithm-driven apps. Social media companies must comply with various requirements, including limiting use to three hours daily and imposing a nightly blackout from 10:30 p.m. to 6:30 a.m. Violations could result in damages starting at $10,000.
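For a sense of how simple the blackout rule is in engineering terms, here is a minimal sketch of a curfew check using the hours named in the law; the function and constant names are illustrative assumptions, not anything the statute prescribes. The only subtlety is that the window crosses midnight, so it has to be treated as two same-day ranges.

```python
from datetime import datetime, time

# Illustrative constants for the law's nightly blackout window (assumed names).
BLACKOUT_START = time(22, 30)  # 10:30 p.m.
BLACKOUT_END = time(6, 30)     # 6:30 a.m.

def in_blackout(now: datetime) -> bool:
    """Return True if `now` falls inside the overnight blackout window."""
    t = now.time()
    # The window crosses midnight, so it is the union of two ranges:
    # [22:30, 24:00) and [00:00, 06:30).
    return t >= BLACKOUT_START or t < BLACKOUT_END

# Example: 11 p.m. is inside the window, noon is not.
assert in_blackout(datetime(2024, 10, 1, 23, 0))
assert not in_blackout(datetime(2024, 10, 1, 12, 0))
```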
NetChoice has successfully obtained injunctions blocking similar laws in California, Arkansas, Ohio, Mississippi, and Texas. “With this being the sixth injunction against these overreaching laws, we hope policymakers will pursue meaningful and constitutional solutions for the digital age,” said Chris Marchese, NetChoice’s director of litigation.
Latest News
White House Announces AI Firms’ Pledge Against Image Abuse
The White House announced this week that several leading AI companies have voluntarily committed to tackling the rise of image-based sexual abuse, including the spread of non-consensual intimate images (NCII) and child sexual abuse material (CSAM). This move is a proactive effort to curb the growing misuse of AI technologies in creating harmful deepfake content.
Companies such as Adobe, Anthropic, Cohere, Microsoft, and OpenAI have agreed to implement specific measures to ensure their platforms are not used to generate NCII or CSAM. These commitments include responsibly sourcing and managing the datasets used to train AI models and safeguarding those datasets against content that could lead to image-based sexual abuse.
In addition to securing datasets, the companies have promised to build feedback loops and stress-testing strategies into their development processes. This will help prevent AI models from inadvertently creating or distributing abusive material. Another crucial step is removing nude images from AI training datasets when deemed appropriate, further limiting the potential for misuse.
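As a rough illustration of what the dataset commitment could look like in practice, the sketch below filters explicit images out of a training corpus using an abstract classifier. The announcement names no specific tooling, so the function names and threshold here are assumptions, not any company’s actual pipeline.

```python
from pathlib import Path
from typing import Callable, List

def filter_training_images(
    image_dir: Path,
    score_fn: Callable[[Path], float],
    threshold: float = 0.5,
) -> List[Path]:
    """Return only the images scored below `threshold` by the classifier.

    `score_fn` stands in for any image classifier that returns an estimated
    probability of explicit content. Flagged files are simply excluded here;
    a production pipeline would also log them for human review and
    hash-based matching.
    """
    kept: List[Path] = []
    for path in sorted(image_dir.glob("**/*")):
        if path.is_file() and score_fn(path) < threshold:
            kept.append(path)
    return kept
```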
These commitments, while voluntary, represent a significant step toward combating a growing problem. Notably absent from the announcement, however, were major tech players such as Apple, Amazon, Google, and Meta.
Despite these omissions, many AI and tech companies have already been working independently to prevent the spread of deepfake images and videos. StopNCII, an organization dedicated to stopping the non-consensual sharing of intimate images, has teamed up with several companies to create a comprehensive approach to scrubbing such content. Additionally, some businesses are introducing their own tools to allow victims to report AI-generated sexual abuse on their platforms.
While the White House announcement doesn’t establish new legal consequences for companies that fail to meet their commitments, it is still an encouraging step. By fostering a cooperative effort, these AI companies are taking a stand against the misuse of their technologies.
For individuals who have been victims of non-consensual image sharing, support is available. Victims can file a case with StopNCII, and for those under 18, the National Center for Missing & Exploited Children (NCMEC) offers reporting options.
In this new digital landscape, addressing the ethical concerns surrounding AI’s role in image-based sexual abuse is critical. Although the voluntary nature of these commitments means there is no immediate accountability, the proactive approach by these companies offers hope for stronger protections in the future.
Source: engadget.com
Latest News
Texas AG Defends Restrictions on Targeted Ads to Teens
Texas Attorney General Ken Paxton is urging a federal judge to uphold restrictions on social media platforms’ ability to collect minors’ data and serve them targeted ads. In papers filed with U.S. District Judge Robert Pitman, Paxton argues that the coalition challenging the law lacks standing to proceed, as the law applies only to social platforms, not users.
The Securing Children Online through Parental Empowerment Act (HB 18) requires social platforms to verify users’ ages and use filtering technology to block harmful content, including material that promotes eating disorders, self-harm, and sexual exploitation. The bill also limits data collection from minors and prohibits targeted ads without parental consent.
The law faces challenges from two lawsuits: one from tech industry groups and another from a coalition that includes advocacy group Students Engaged in Advancing Texas and the Ampersand Group, which handles ads for nonprofits and government agencies. The coalition claims the law will prevent them from delivering public service ads, such as fentanyl warnings or sex trafficking alerts, to teens.
In a previous ruling, Judge Pitman blocked parts of the law requiring content filtering but left in place restrictions on data collection and targeted advertising. The judge stated those provisions might be challenged later.
Paxton contends that the coalition’s arguments are too vague, questioning the specifics of their ad plans and whether the law targets only commercial advertising. Judge Pitman has not yet issued a final ruling on the coalition’s request.