
Tech & IT

Minnesota Takes the Lead: Deepfake Regulation Set to Materialize by May 22nd

The Minnesota State Senate voted to pass a bill intended to address the misuse of artificial intelligence (AI)-generated video, images, and sound known as “deepfakes.” The often convincing yet fabricated media created with these technologies have alarmed ethicists, political observers, and others over their potential use in election manipulation and the nonconsensual distribution of sexual images.


If signed into law, the bill would criminalize the creation and distribution of false digital media, including digital pornography, without the consent of the person or persons depicted. In addition, the bill provides felony penalties for knowingly posting a pornographic deepfake to a website, disseminating one for profit, using one to harass a person, or for repeat offenses within a five-year period.

Widespread bipartisan support suggests that the bill could become law by May 22nd. Other states, such as California and Texas, have already enacted similar legislation as deepfake technology has become dramatically more accessible in recent years.

Sen. Erin Maye Quade, DFL-Apple Valley, a cosponsor of the bill, said, “Deepfake technology has the power to damage reputations, ruin lives, and even threaten the integrity of our democracy.” House bill sponsor Rep. Zack Stephenson, DFL-Coon Rapids, added that, “Minnesota already has a statute prohibiting revenge porn or the nonconsensual distribution of private sexual images, but this would not apply to pornographic deepfakes.”

The bill would allow victims of sexual deepfakes to sue the creators for damages and have the images removed from the internet. It would also criminalize the distribution of videos altered within 60 days of an election with the intent to injure a candidate or influence the election’s outcome.

Sen. Nathan Wesenberg, R-Little Falls, was the only senator to vote against the bill, saying he wanted higher civil fines included for deepfake offenses. Even so, the bill appears poised to become law in Minnesota, setting an example for other states seeking to combat the rising danger of deepfakes.


Stripe Shuns Sex Work, Yet Gains from AI Non-Consensual Images

Stripe, a $50 billion payment processing giant known for avoiding the adult industry, is currently reaping profits from AI-produced non-consensual pornographic images.


Two platforms, CivitAI and Mage.Space, have been highlighted for using text-to-image AI tools to create explicit images, frequently of celebrities and other public figures. Mage.Space, for instance, charges subscription fees for this service.

Both CivitAI and Mage.Space process their transactions via Stripe, meaning with every transaction, Stripe receives a share. Notably, Stripe’s typical rate is 2.9% plus a flat $0.30 per transaction.
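At those published rates, the processor’s cut per transaction is straightforward to compute. A minimal sketch (the 2.9% + $0.30 figure comes from the article; the $10 subscription price used below is a hypothetical example, not a quoted Mage.Space price):

```python
def processor_fee(amount: float, percent: float = 0.029, flat: float = 0.30) -> float:
    """Return the payment processor's cut of a single transaction,
    using a percentage rate plus a flat per-transaction fee."""
    return round(amount * percent + flat, 2)

# Hypothetical $10.00 monthly subscription:
# 10.00 * 0.029 + 0.30 = 0.59, i.e. the processor keeps $0.59
fee = processor_fee(10.00)
```

On small subscription amounts, the flat $0.30 dominates, so the effective percentage taken is well above 2.9%.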

GoAskAlex, an adult performer, expressed concern about the irony in Stripe’s dealings. She emphasized the injustice of financial institutions capitalizing on non-consensual content, especially when they actively distance themselves from legitimate adult industry professionals. Hany Farid, a UC Berkeley professor, similarly questioned the ethics of online financial platforms supporting such services.

Mage’s co-founder, Gregory Hunkins, said the platform prohibits non-consensual imagery. However, such images were still readily available on the platform. Multiple requests to Stripe for comment went unanswered.

The inconsistency in Stripe’s policy becomes glaring given its stated refusal to work with “adult content and services.” The company explicitly lists the types of businesses it will not serve, including explicit content and adult services.

Mike Stabile of the Free Speech Coalition expressed astonishment at Stripe’s seemingly contradictory actions, pointing out how several in the adult industry have been banned or denied services by Stripe. The current situation feels like an affront, especially when one considers that legitimate adult professionals are sidelined while AI platforms exploiting their likenesses profit.



X Corp. Faces 2,000 Arbitration Demands, Agrees to Negotiate

Following Elon Musk’s acquisition of Twitter in October 2022, X Corp. has entered discussions to address arbitration claims from nearly 2,000 laid-off employees.


Attorney Shannon Liss-Riordan, in a memo cited by Bloomberg, stated, “We have successfully brought Twitter to the negotiation table. Twitter seeks global mediation to settle all our filed claims.” Private mediation sessions are slated for December 1 and 2.

A source indicated to Bloomberg that X Corp. is acting in compliance with a court order to mediate. An August filing disclosed over 2,200 individual arbitration demands against Twitter, with potential filing fees nearing $3.5 million as reported by CNBC. X Corp. has pushed for an even distribution of the arbitration costs.

Previously, Twitter (now “X”) allegedly compelled ex-employees to opt for arbitration over lawsuits while declining arbitration costs. Liss-Riordan pursued multiple class actions, including one suggesting Twitter breached the federal and California Worker Adjustment and Retraining Notification (WARN) Acts, failing to provide 60 days’ notice before a mass layoff.

In January 2023, a federal judge mandated ex-employees into arbitration due to existing agreements. However, Twitter was accused in July of both insisting on arbitration and avoiding the associated costs.

The company also faces allegations of unpaid severance, discrimination, and WARN Act and FMLA violations. Liss-Riordan, representing the ex-employees, commented, “We are dedicated to ensuring they receive what’s due.”

Updates will follow upon receiving comments from Liss-Riordan and X Corp.

Source: Bloomberg



Texas Age Verification Law for Adult Websites Blocked by Federal Judge

A day before it was set to be enforced, a Texas law mandating age verification and health warnings for pornographic websites was halted by U.S. District Judge David Ezra. He ruled in favor of the Free Speech Coalition, an association representing the adult entertainment industry, arguing that the law infringed on First Amendment rights and lacked clarity.


Judge Ezra criticized House Bill 1181, which was endorsed by Gov. Greg Abbott, stating that the age verification component unnecessarily restricted adults from accessing lawful adult content under the guise of shielding minors.

Furthermore, the judge voiced privacy concerns. The law’s provision requiring age verification via government-issued ID could let the state government monitor and log users’ visits to such websites. This poses particular risks for people accessing LGBTQ content, given Texas’ ongoing controversial laws concerning homosexuality.

The ruling also highlighted concerns about potential leaks or breaches exposing personal data.

Another aspect of the law mandates warnings on porn sites about supposed psychological risks of viewing adult content. However, Judge Ezra pointed out a lack of evidence supporting the effectiveness of such warnings in restricting minors’ access. He further noted that these warnings were labeled “Texas Health and Human Services” without clear backing from the named institution.

It’s worth noting that Texas followed in the footsteps of Louisiana and other states in proposing such legislation. However, the Texas law has gaps: it excludes social media sites, which likely do not meet the law’s threshold of one-third sexually explicit content. This loophole means minors could still access adult content on platforms like Reddit, Tumblr, and Instagram.

In summary, Judge Ezra emphasized that while the law’s intent was to protect minors, it was not adequately designed for that purpose and instead contained broad exemptions.
