
Tech & IT

Child Exploitation via AI Deepfakes Earns 10 to 30-Year Sentence in Louisiana

As of August 1, Louisiana will enforce a newly enacted law that criminalizes the creation and possession of deepfakes depicting child sexual exploitation. Louisiana’s SB175, signed by Governor John Bel Edwards, prescribes stringent consequences for producing, distributing, or possessing illegal deepfake content featuring minors. Penalties include a mandatory prison term of five to 20 years, a fine of up to $10,000, or both.


Deepfakes – AI-engineered videos that distort reality by fabricating people, places, and events – present significant obstacles for law enforcement and cybersecurity. Advances in AI have made deepfakes harder to detect, underscoring the need for legal countermeasures. Louisiana, which grapples with child welfare issues and high poverty rates, follows states such as California, Texas, and Virginia in taking legal steps to restrict or ban deepfakes.

SB175 also addresses nonconsensual explicit content, often referred to as “revenge porn.” Under the law, those who knowingly advertise, distribute, or sell explicit deepfakes without consent, particularly those involving minors, face a mandatory prison sentence of 10 to 30 years, a fine of up to $50,000, or both. Louisiana legislators also specified that any prison sentence under the law be served at “hard labor.”

Deepfakes, especially those involving harm to individuals, have gained global attention. In May, deepfakes illustrating child homicide victims went viral on social media platforms, including TikTok. UN Secretary-General António Guterres has expressed concerns about the potential misuse of AI and deepfakes in inciting hatred and violence in conflict-ridden areas.

The advent of deepfakes raises questions about the reliability of visual content. Marko Jak, CEO of Secta Labs, warns that society is entering an era in which the authenticity of visual media can no longer be assumed. Today’s deepfakes may still be recognizable by their flaws, but as AI technology progresses, detecting convincingly realistic deepfakes could become a significant challenge.

Criminal exploitation of deepfakes for fraud and blackmail is a growing concern for law enforcement agencies. The FBI has noted reports of victims, including minors, whose photos and videos were manipulated into explicit content. Recognizing the potential for misuse, Meta declined to publicly release Voicebox, its AI voice-generation model, emphasizing the need to balance open access with responsibility in AI technology.

Louisiana’s introduction of the deepfake law signifies increasing awareness of the serious threats posed by manipulated media. The state seeks to safeguard minors from exploitation and discourage those intending to spread harmful content by criminalizing the production and possession of child exploitation deepfakes. However, managing deepfake-related crimes effectively necessitates ongoing AI advancements, as well as collaboration between tech companies, law enforcement, and policymakers.

As AI technology continues to evolve, society must remain proactive in updating legal measures and deploying advanced detection tools to counter malicious actors. A comprehensive strategy against deepfakes combines legal action, public awareness, and innovation in AI detection techniques. Through such proactive measures, Louisiana, along with other jurisdictions, strives to protect vulnerable populations and preserve trust in the digital world.

Events

Love & Sex With Robots Conference 2025

The 10th annual Love & Sex With Robots conference is scheduled for August 15-17, 2025, at the Université du Québec à Montréal. Founded by David Levy, author of Love & Sex With Robots: The Evolution of Human-Robot Relationships, the event brings together scientists, academics, and sextech professionals, as well as passionate enthusiasts known as ‘iDollators,’ who are intrigued by the future of human relationships with robots and love dolls.


The conference will feature lectures, panel discussions, and workshops, with both in-person and online options to ensure accessibility. Organizers are currently accepting abstract submissions, with a deadline of March 31, 2025. Suggested topics include robot emotions, teledildonics, roboethics, humanoid robots, and intelligent electronic sex hardware.

This year’s event is sponsored by Kiiroo, a prominent name in the sextech industry. As with previous editions, advancements in AI are expected to take center stage, sparking discussions about their impact on robotics and human-robot interactions. While keynote speakers have yet to be announced, updates are expected soon.

In addition to founder David Levy, the Love & Sex With Robots committee includes notable members such as Simon Dubé, a Kinsey Institute fellow known for his research on sex in space. Other committee members have included Bobbi Bidochka, founder of Imagine Ideation; Emily Jaworski, a graduate of the Institute of Future Studies; and Rebecca Gibson, teaching assistant professor at Virginia Commonwealth University.

The 2025 conference promises to offer fresh insights into the intersection of technology, intimacy, and human connection, making it a must-attend event for those interested in the future of robotics and sextech.

lovewithrobots.com


Latest News

Britain to Criminalize Explicit Deepfakes: Two-Year Prison Sentences

Britain is set to make creating and sharing sexually explicit “deepfakes” a criminal offense, as part of new measures aimed at curbing the rising abuse of this technology, which primarily targets women and girls, the government announced on Tuesday.

Deepfakes – videos, images, or audio clips digitally altered with artificial intelligence to appear authentic – have increasingly been misused to graft the likenesses of unsuspecting individuals into pornographic content.


Although “revenge porn” – the publication of intimate photos or videos without consent to cause distress – was outlawed in 2015, those laws do not cover fabricated imagery like deepfakes. Data from the UK-based Revenge Porn Helpline reveals that cases involving deepfake abuse have risen by more than 400% since 2017, highlighting the urgency for updated legislation.

Under the proposed laws, perpetrators who create or share explicit deepfakes without consent could face criminal prosecution. The government also plans to introduce separate offenses for taking intimate images without consent and for installing equipment with the intent to commit such acts. Individuals found guilty of these offenses could face up to two years in prison.

“There is no excuse for creating a sexually explicit deepfake of someone without their consent,” the Ministry of Justice stated.

These measures will form part of the upcoming Crime and Policing Bill, with additional details to be released in the coming months.

A National Response to Image-Based Abuse

Victims Minister Alex Davies-Jones condemned the growing issue, stating, “This demeaning and disgusting form of chauvinism must not become normalized.”

Campaigners, including Jess Davies, emphasized the devastating impact of deepfake abuse. “Intimate-image abuse is a national emergency causing significant, long-lasting harm to women and girls. They face a total loss of control over their digital footprint, at the hands of online misogyny,” she said.

The proposed legislation follows an earlier attempt by the previous Conservative government to introduce similar laws, which would have imposed fines and potential jail time for offenders. However, those plans were not finalized before the Labour Party took power in July.

Tech Platforms Under Pressure

Technology Minister Margaret Jones highlighted the role of tech companies, warning that platforms hosting abusive content will face tougher scrutiny and significant penalties under the new laws. The government has already taken steps to address image-based abuse by amending the Online Safety Act in 2024, requiring platforms to remove harmful content or risk enforcement action by regulators such as Ofcom.

A Step Toward Accountability

The upcoming Crime and Policing Bill is expected to bring clarity and stronger protections for victims of intimate-image abuse, marking a significant step toward addressing the misuse of AI-powered technology in Britain. While the exact date for the bill’s introduction to parliament has not been set, these measures reflect a firm commitment to holding offenders accountable and safeguarding vulnerable individuals from online abuse.

Further updates on the bill and its provisions are expected in the near future.

Source: www.gov.uk


Latest News

White House Announces AI Firms’ Pledge Against Image Abuse

The White House announced this week that several leading AI companies have voluntarily committed to tackling the rise of image-based sexual abuse, including the spread of non-consensual intimate images (NCII) and child sexual abuse material (CSAM). This move is a proactive effort to curb the growing misuse of AI technologies in creating harmful deepfake content.


Companies such as Adobe, Anthropic, Cohere, Microsoft, and OpenAI have agreed to implement specific measures to ensure their platforms are not used to generate NCII or CSAM. These commitments include responsibly sourcing and managing the datasets used to train AI models, safeguarding them from any content that could lead to image-based sexual abuse.

In addition to securing datasets, the companies have promised to build feedback loops and stress-testing strategies into their development processes. This will help prevent AI models from inadvertently creating or distributing abusive material. Another crucial step is removing nude images from AI training datasets when deemed appropriate, further limiting the potential for misuse.
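To make the dataset-screening commitment concrete, the sketch below shows how a training pipeline might screen images with an off-the-shelf NSFW classifier before they reach a model. This is a minimal, hypothetical example, not any signatory company’s actual pipeline: the model name (a publicly available classifier on the Hugging Face Hub), the score threshold, and the directory layout are all illustrative assumptions.

```python
# Hypothetical sketch of pre-training dataset screening -- not any
# company's actual pipeline. Assumes the publicly available
# "Falconsai/nsfw_image_detection" classifier from the Hugging Face Hub,
# which labels images as "normal" or "nsfw".
from pathlib import Path

from transformers import pipeline

classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

def filter_training_images(src_dir: str, threshold: float = 0.5) -> list[Path]:
    """Keep only images whose 'nsfw' score falls below `threshold`."""
    kept = []
    for path in sorted(Path(src_dir).glob("*.jpg")):
        # Returns a list of {"label": ..., "score": ...} dicts.
        scores = classifier(str(path))
        nsfw = next((s["score"] for s in scores if s["label"] == "nsfw"), 0.0)
        if nsfw < threshold:
            kept.append(path)
    return kept
```

In a real pipeline this kind of filter would be one layer among several (human review, provenance checks, hash-matching against known abusive material), since no single classifier catches everything.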

These commitments, while voluntary, represent a significant step toward combating a growing problem. Notably absent from the announcement, however, were major tech players such as Apple, Amazon, Google, and Meta.

Despite these omissions, many AI and tech companies have already been working independently to prevent the spread of deepfake images and videos. StopNCII, an organization dedicated to stopping the non-consensual sharing of intimate images, has teamed up with several companies to create a comprehensive approach to scrubbing such content. Additionally, some businesses are introducing their own tools to allow victims to report AI-generated sexual abuse on their platforms.
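For context on how that kind of scrubbing can work, the sketch below illustrates perceptual-hash matching, the general technique behind systems like StopNCII: a victim’s image is reduced to a compact hash on their own device, and platforms compare uploads against the hash without ever receiving the image itself. It uses the open-source imagehash library as an illustrative stand-in; StopNCII’s production system uses its own hash formats and infrastructure.

```python
# Illustrative perceptual-hash matching, in the spirit of systems like
# StopNCII. Uses the open-source `imagehash` library; real deployments
# use their own hash formats, so treat this as a sketch of the idea.
import imagehash
from PIL import Image

def perceptual_hash(image_path: str) -> imagehash.ImageHash:
    """Hash an image so that visually similar images get similar hashes."""
    return imagehash.phash(Image.open(image_path))

def is_known_abusive(candidate_path: str,
                     known_hashes: list[imagehash.ImageHash],
                     max_distance: int = 8) -> bool:
    """Flag an upload whose hash is close to any hash of known abusive images."""
    h = perceptual_hash(candidate_path)
    # ImageHash overloads subtraction to return the Hamming distance in bits,
    # so near-duplicates (recompressed, resized) still match.
    return any(h - known <= max_distance for known in known_hashes)
```

The key privacy property is that only hashes travel off the victim’s device; the hashes cannot be reversed into the original image.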

While the White House announcement doesn’t establish new legal consequences for companies that fail to meet their commitments, it is still an encouraging step. By fostering a cooperative effort, these AI companies are taking a stand against the misuse of their technologies.

For individuals who have been victims of non-consensual image sharing, support is available. Victims can file a case with StopNCII, and for those under 18, the National Center for Missing & Exploited Children (NCMEC) offers reporting options.

In this new digital landscape, addressing the ethical concerns surrounding AI’s role in image-based sexual abuse is critical. Although the voluntary nature of these commitments means there is no immediate accountability, the proactive approach by these companies offers hope for stronger protections in the future.

Source: engadget.com
