
Deepfake Pornography Investigation: French MP Targeted Before Elections

by Dr. Jennifer Chen

The rise of artificial intelligence has introduced new challenges to the political landscape, extending beyond the spread of misinformation to include the creation of non-consensual, sexually explicit deepfakes. In France, authorities are grappling with this emerging threat, as evidenced by a recent investigation into a deepfake targeting a Member of Parliament and a broader probe into X, formerly known as Twitter, and its handling of such content.

French police raided the Paris offices of X, formerly known as Twitter, as part of a preliminary investigation into the dissemination of child sexual abuse images and deepfakes. The raid coincided with the summoning of Elon Musk, the owner of X, for questioning. The investigation, which began in January of the previous year, is being conducted in coordination with Europol, the European Union’s law enforcement agency.

The investigation into X is multifaceted, initially sparked by concerns over alleged algorithm manipulation. It has since expanded to include allegations of “complicity” in the spread of pornographic images of minors, sexually explicit deepfakes, and the denial of crimes against humanity. Prosecutors have also raised concerns about X’s data privacy practices, alleging that the platform allows users to be targeted for advertising based on highly sensitive personal information, violating privacy regulations.

Adding another layer to the legal scrutiny, French authorities are also investigating allegations that X has facilitated the spread of Holocaust denial, which is illegal under French law. Both Musk and Linda Yaccarino, X’s former CEO, have been summoned to appear before prosecutors this spring in connection with the probe.

The recent case involving Claire Marais-Beuil, a deputy with the Rassemblement National (RN) and a candidate in the upcoming municipal elections in Beauvais, highlights the immediate and personal impact of these deepfakes. A video depicting a fabricated striptease of Marais-Beuil, generated using artificial intelligence from a photograph taken at the National Assembly, circulated online. Marais-Beuil filed a complaint in December after a constituent alerted her to the video’s existence.

The Beauvais public prosecutor’s office has opened an investigation for “dissemination, through an online public communication service, of a montage or content generated by algorithmic processing of a sexual nature reproducing the image or words of a non-consenting person.” As of now, the perpetrator(s) responsible for creating and distributing the deepfake remain unidentified.

Marais-Beuil expressed her distress over the incident, describing it as a “demeaning” process that attacks “the woman behind the politician.” She suspects the deepfake is linked to the municipal election, though she is unsure whether the perpetrator is an opponent or someone within her own party.

The broader context of the Beauvais municipal election involves a competitive field of candidates, including the current mayor, Franck Pia, and a broad left-wing alliance encompassing socialists, communists, and La France Insoumise (LFI). This heightened political competition may contribute to the motivation behind such malicious acts.

The French investigations into X are not isolated. The United Kingdom launched its own investigation into the platform shortly after the raid in Paris. These actions underscore the growing international concern over the potential for social media platforms to be exploited for the creation and dissemination of harmful deepfake content.

X’s Global Government Affairs department has dismissed the allegations as “baseless,” asserting that the investigation infringes on the platform’s right to defend itself. Elon Musk has characterized the investigation as a “political attack,” though he has not provided supporting evidence.

The case of Claire Marais-Beuil and the broader investigation into X highlight the urgent need for effective strategies to combat the creation and spread of deepfakes, particularly those of a sexually explicit and non-consensual nature. The legal and ethical challenges posed by this technology are significant, requiring a coordinated response from law enforcement, technology companies, and policymakers.

The investigation into X also encompasses concerns about the platform’s AI bot, Grok, which reportedly generated an estimated 3 million sexualized images over 11 days, including 23,000 that appeared to depict children. This aspect of the investigation raises serious questions about the safeguards in place to prevent the misuse of AI technology for the creation and distribution of child sexual abuse material.

As AI technology continues to advance, the potential for malicious actors to exploit it for harmful purposes will likely increase. The French investigations serve as a stark reminder of the need for proactive measures to address this evolving threat and protect individuals from the devastating consequences of deepfake abuse.
