Surge in Fake News and AI-Driven Disinformation Threatens European Union Elections

Voters in the European Union are facing a significant challenge as they prepare to elect lawmakers for the bloc’s parliament. The democratic exercise, set to take place from Thursday to Sunday, is marred by the looming threat of online disinformation and the potential amplification of fake news through the use of artificial intelligence (AI). Experts have observed a surge in the quantity and quality of false content and anti-EU disinformation leading up to the election, raising concerns about the ease with which voters can be deceived.

The spread of disinformation is not limited to domestic sources but also involves international actors, with Russia widely blamed, although such attacks are difficult to attribute directly. Josep Borrell, the EU’s foreign policy chief, has warned that Russia’s state-sponsored campaigns aim to flood the EU information space with deceptive content, posing a threat to democratic debate, particularly at election time. Borrell highlighted Russia’s exploitation of social media’s broad reach and its use of cheap AI-assisted operations to manipulate information and push smear campaigns against European political leaders critical of President Vladimir Putin.

Instances of election-related disinformation have already been observed in various EU member countries. In Spain, a fake website was registered just two days before national elections, mimicking an official site and falsely warning of a potential attack by the disbanded Basque militant separatist group ETA. Similarly, in Poland, a bogus bomb threat was reported at a polling station just days before the parliamentary election, with social media accounts linked to Russian interference spreading false claims of an explosion. AI-generated audio recordings impersonating a candidate discussing election rigging also circulated on social media, requiring fact-checkers to debunk them.

The impact of disinformation campaigns extends beyond disrupting elections. Experts and analysts warn that these campaigns aim to erode societal trust, fuel public discontent with political elites, divide communities over issues like family values and gender, sow doubts about climate change, and chip away at Western support for Ukraine. The rise of generative AI technology has made it easier for malicious actors to create authentic-looking deepfake images, videos, and audio, making it increasingly challenging for disinformation watchers to debunk fabricated content.

In response to these threats, the EU has implemented the Digital Services Act, a comprehensive law that holds platforms accountable for spreading disinformation and allows for hefty fines. The law is being used to demand information from tech companies such as Microsoft and Meta Platforms, the owner of Facebook and Instagram, regarding the risks posed by AI chatbots and their efforts to protect users from disinformation campaigns. While the EU has also passed an artificial intelligence law requiring the labeling of deepfakes, it will not be in effect in time for the upcoming elections.

Tech companies, including Meta Platforms and TikTok, have pledged measures to protect election integrity, such as setting up election operations centers, employing content reviewers, and using AI to combat abuse. However, concerns remain about the systemic use of generative AI tools to disrupt elections.