Key Takeaways
Rep. Alexandria Ocasio-Cortez spearheads the DEFIANCE Act of 2024, targeting the spread of nonconsensual AI-generated explicit content.
This act aims to hold accountable those involved in creating, distributing, or receiving such materials.
The DEFIANCE Act of 2024 proposes amendments to the Violence Against Women Act, allowing victims to take legal action against perpetrators.
The legislation focuses on protecting individuals from unauthorized and harmful digital representations, emphasizing the importance of consent.
The proposed legislation amends the Violence Against Women Act (VAWA) and enables victims to sue those who produce, distribute, or receive deepfake pornography.
Congresswoman Alexandria Ocasio-Cortez stated in the press release:
“Victims of nonconsensual pornographic deepfakes have waited too long for federal legislation to hold perpetrators accountable. As deepfakes become easier to access and create — 96% of deepfake videos circulating online are nonconsensual pornography — Congress needs to act to show victims that they won’t be left behind.”
Co-led by Senators Dick Durbin and Lindsey Graham, the bill has garnered endorsements from more than 25 organizations. It represents a bipartisan effort in Congress, underscoring the significant concern and the need for regulation in this area.
Social media giants are being called on to implement effective policies to combat the rising concern over deepfakes following a recent series of deepfake scandals involving celebrities such as Taylor Swift, Bobbi Althoff, and Jenna Ortega.
These scandals have brought further attention to the issue of digital consent and the effectiveness of social media moderation.
In response to the spread of deepfake images of Taylor Swift, X implemented a broad measure by blocking all search terms associated with the artist. This decision followed a significant incident where AI-generated explicit content featuring Swift was accessible for 19 hours.
The platform’s drastic action has sparked a debate on the effectiveness and ethics of such moderation strategies. Critics argue that the response was more reactive than proactive and demonstrated an inability to moderate offensive content appropriately.
Manual detection of deepfakes is still, for the most part, possible. However, it is a labor-intensive task that requires appropriate training.
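For illustration, the sketch below shows how automated screening might reduce some of that manual workload: it samples frames from a video with OpenCV and passes them to a classifier. The classifier here is a placeholder stub and the filename is hypothetical; neither is drawn from the article or the report, which do not name a specific tool.

```python
# Minimal frame-sampling sketch for automated deepfake screening.
# The classifier is a placeholder; a real workflow would load a trained
# detection model (not specified here) in its place.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Placeholder: return a fake-probability score for one frame.

    A real system would run a trained deepfake detector here; this stub
    returns 0.0 so the sketch stays self-contained.
    """
    return 0.0


def screen_video(path: str, every_nth: int = 30, threshold: float = 0.5) -> bool:
    """Sample every Nth frame and flag the video if any sampled frame scores high."""
    capture = cv2.VideoCapture(path)
    flagged = False
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video or read error
        if index % every_nth == 0 and score_frame(frame) >= threshold:
            flagged = True
            break
        index += 1
    capture.release()
    return flagged


if __name__ == "__main__":
    print(screen_video("suspect_clip.mp4"))  # hypothetical filename
```

Even with such automation, flagged videos would still need human review, which is why the training mentioned above remains relevant.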
A Europol report cites a 2020 study by Sensity, an Amsterdam-based company that detects and tracks deepfakes online, which found 85,047 deepfake videos on popular streaming websites, a figure that had been doubling every six months.
The report emphasized that law enforcement agencies will need not only to upskill their workforce to detect deepfakes but also to “invest in their technical capabilities to address the upcoming challenges effectively while respecting fundamental rights”.
Law enforcement officers will need to use tested and proven methods when making audiovisual recordings, as well as technical and organizational safeguards against tampering, in order to prove the authenticity of the footage, the report adds.
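One common baseline for such safeguards, not prescribed by the report itself, is to record a cryptographic digest of footage at capture time so any later copy can be checked against it. The sketch below, with hypothetical filenames, uses Python’s standard hashlib for that comparison.

```python
# Basic tamper-evidence check: compute a SHA-256 digest of the footage when it
# is captured, then re-compute it later to confirm the file has not been altered.
# Filenames are illustrative only.
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    original = sha256_of_file("bodycam_2024-02-01.mp4")   # hypothetical original recording
    later_copy = sha256_of_file("evidence_copy.mp4")      # hypothetical copy under review
    print("copy matches original" if original == later_copy else "file differs from original")
```

A matching digest only shows the file is byte-for-byte identical to the one that was hashed; proving that the original recording itself was authentic still depends on the organizational safeguards the report describes.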
The incidents involving high-profile individuals highlight the need for legislative measures like the DEFIANCE Act. As the technology becomes more accessible, the potential for misuse grows, making it imperative to establish clear legal frameworks to protect individuals from nonconsensual and deceptive digital representations.
While the DEFIANCE Act aims to offer victims a legal remedy, the fight against digital exploitation starts with reinforcing ethical standards in technology use so that harm is prevented before it occurs.