
Giorgia Meloni to Testify Against Deepfake Offenders, Demands $108K

Published March 25, 2024 2:10 PM
Giuseppe Ciccomascolo

Key Takeaways

  • Italian Prime Minister Giorgia Meloni will testify in a case involving deepfake activities in Italy.
  • A hearing before Judge Monia Adami has been set for July.
  • Meloni is not the first celebrity deepfake victim.

Italian Prime Minister Giorgia Meloni has been summoned to testify in the Sassari court as the aggrieved party in a case involving fabricated pornographic videos that superimposed her face onto the bodies of adult film actresses.

Defamation proceedings are currently underway in Sardinia against two individuals, a father and son duo, who stand accused of disseminating manipulated videos on a US-based website back in 2020.

The Case

The face may be Giorgia Meloni's, but the body is not hers. Deepfake pornographic videos featuring the Prime Minister's likeness have garnered millions of views, prompting the Italian leader to seek compensation in court.

Allegedly crafted by a father-son duo from Sassari, aged 73 and 40, respectively, the videos were uploaded online in 2020. The defendants are charged with defamation.

Meloni seeks damages totaling $108,000. The government leader said she would donate the sum to a Ministry of the Interior fund supporting women affected by violence. “This plea serves as a message to all women enduring such abuses, urging them not to hesitate in reporting,” said Meloni’s lawyer, Maria Giulia Marongiu. “The sum is emblematic, aimed at bolstering the safeguarding of victims, women who, often unwittingly, fall prey to such offenses.”

A hearing before Judge Monia Adami has been set for July 2, 2024. According to her lawyers, the Prime Minister’s testimony is necessary, and the judge has ordered it, agreeing on the date with the aggrieved party.

Meloni Not The First Victim

Michelle Obama, Scarlett Johansson, and Emma Watson have all fallen victim to this disturbing trend. The process of creating deepfake porn is shockingly straightforward: a user needs only a photo of someone’s face, which software can then graft onto the body of a pornographic performer.

In 2019, DeepTrace, a Dutch company dedicated to monitoring synthetic media online, revealed that a staggering 96% of deepfake content circulating on the internet is pornographic in nature. February 2023 marked a peak month for the production and dissemination of deepfake pornographic material.

This unsettling phenomenon has steadily gained momentum. It doesn’t just ensnare prime ministers and actresses; it preys upon ordinary women and girls who become unwitting victims of revenge porn or sextortion. In early September 2023, 20 girls from Almendralejo, a Spanish municipality in Badajoz, were horrified to discover that their peers had used artificial intelligence (AI) software to fabricate fake nude images of them.

Similar incidents have unfolded across Europe with the emergence of the BikiniOff app, which can virtually disrobe individuals based on a photograph. The app spurred controversy when two individuals used it to create fake nude images of their classmates under the guise of a ‘joke,’ shedding light on yet another facet of the deepfake epidemic.

Social Platforms Try To Tackle Deepfakes

In January 2020, Meta announced its stance against manipulated content proliferating across its platforms, a direct response to the escalating prevalence of deepfakes online. Emphasizing its commitment to safeguarding users, the company vowed to promptly remove harmful AI-altered content.

Meanwhile, YouTube has rolled out a new tool to identify AI-generated content, which lets creators voluntarily flag videos containing elements synthesized with this technology. The feature requires creators to transparently label ‘altered’ content when uploading and publishing, even if it closely resembles reality.

“The proliferation of deepfakes not only poses grave concerns for online privacy and security but also prompts ethical inquiries regarding the responsible utilization of such technology,” an AI expert told CCN, requesting anonymity.

“As AI continues to advance, it becomes imperative to deliberate on its ethical implications and ensure its judicious application to prevent exploitation and harm.”

“In essence, the Meloni case underscores the pressing need to confront the menace of deepfakes head-on. And we need to implement robust measures to safeguard individuals’ privacy and security in the digital realm,” the expert added.
