New AI deepfake and non-consensual content bill - what are the impacts?
Looking at it from a safety angle, it looks great. A federal bill that punishes non-consensual sexual content, including non-consensual AI deepfake sexual content, should help protect people.
Here's an outline of it - Platform Responsibilities: Social media companies and websites are required to remove flagged nonconsensual intimate content within 48 hours of a victim's request and take steps to eliminate duplicates.
Enforcement: The Federal Trade Commission (FTC) is tasked with enforcing compliance, with penalties including fines of up to $50,000 per violation (per WIRED).
Penalties for Offenders: Individuals found guilty under this law may face up to three years in prison, fines, or both.
While the law aims to protect victims of digital exploitation, some digital rights organizations have raised concerns:
Potential for Overreach: Critics argue that the law's broad language could lead to the removal of legitimate content, including legal pornography and LGBTQ+ materials.
Lack of Safeguards: There are worries about the absence of a robust appeals process for content removals and the potential for misuse by individuals submitting fraudulent takedown requests.
Impact on Free Speech: Organizations like the Electronic Frontier Foundation caution that the law might suppress legitimate expression and threaten privacy tools like encryption.
What do we think of this bill? I like that it helps people, but it might invite a lot of frivolous takedown requests, which could stifle marketing and posting content on platforms like Reddit and Twitter. My Twitter account, with 1.1 million followers, was suspended just a day after this bill became law. Everyone in my videos and content has signed paperwork, 2257 documentation, and more, yet within 48 hours of a report my content is taken down.