(Dis)information wars: Battling fake news ahead of the EU elections

In the run-up to the European Parliament elections next month, there’s mounting concern that disinformation could sway the outcome. Is the EU ready for the challenge?

By Sarah Schug

Sarah is a staff writer for The Parliament with a focus on art, culture, and human rights.

16 May 2024

Fake videos of a minister pledging to take away minorities’ rights. Others showing film stars criticising the prime minister. Arrests of opposition party staffers, who deny being behind it. Hundreds of millions of internet users not knowing what they can trust. 

This has all taken place during India’s general election, the world’s largest ever, in which voting is still under way. EU officials are rushing to prevent something similar happening in the European Parliament elections next month – but experts warn that some disinformation is bound to get through regardless. 

Disinformation – strategic deception aimed at confusing and overwhelming its audience – has been around for thousands of years. Around 44BC, Octavian – who would later become Augustus, the first Roman emperor – carried out a propaganda campaign against his rival, Mark Antony, including slogans etched on to coins smearing his reputation. 

And the invention of the printing press around 1440, which irrevocably changed production and access to printed matter, fuelled fears that sound familiar today: “Nowadays the innumerable crowd of printers causes confusion everywhere,” wrote Dutch scholar Erasmus in 1508. 

More recently, in 2016, Russian media in Germany reported a fabricated story about a teenage girl being raped by migrants, which went viral on social media and led to street protests and violence targeting refugees. 

Since then, technology has advanced exponentially, with AI-generated photos, videos and recordings often indistinguishable from the real thing to the untrained eye. That could have a range of harmful effects on democracies in this bumper global election year, including the European Parliament elections from 6 to 9 June. 

Michael Meyer-Resende, the executive director of Berlin-based NGO Democracy Reporting International, points to a fake audio recording that purportedly captured a candidate conspiring with a journalist to rig the vote, causing turmoil just before national elections in Slovakia late last year. 

“AI-generated content can create a lot of confusion because properly debunking it takes time,” he says. “It creates something that we call the liar’s dividend. Suddenly people even question true stories and suspect manipulation.” 

A study from August 2023 by the German Bertelsmann Foundation revealed that more than half of European citizens doubt the accuracy of information on the internet, while a World Economic Forum report from earlier this year identified AI-powered misinformation as the world’s biggest short-term threat. 

In this context, the European elections next month will be “a test of our systems,” European Parliament President Roberta Metsola said in March. 

New tools  

The newest and most potent of those systems is the Digital Services Act (DSA), whose obligations for the largest platforms have applied since August last year. It holds platforms accountable for the disinformation they host – as well as online hate speech, counterfeit products and more.  

The DSA imposes its strictest duties on ‘very large online platforms’ with more than 45 million monthly active users in the EU, including Instagram, YouTube and X. They are obliged to submit regular risk reports; enable users to easily flag illegal content and notify them of moderation decisions; and field data requests from researchers and inquiries from the European Commission. 

The Commission can fine these companies up to six per cent of their global revenue if they’re found to be in breach of the legislation. It has open investigations against X, covering several areas including content moderation, and against TikTok, with a focus on the protection of minors.  

“It has teeth to enforce the provisions, and the commissioner [Thierry Breton] will certainly not hesitate to demonstrate that,” Johannes Bahrke, Commission spokesperson on the digital economy, tells The Parliament. 

The EU executive is now preparing to deploy the DSA specifically to safeguard next month’s elections. On 24 April, it held an election stress test with all the digital services coordinators (DSCs) – agencies designated by each EU member state to help governments coordinate among themselves and with the Commission. 

“In our community, there’s the feeling that we have very good instruments now,” says Meyer-Resende. But he expresses caution about bringing everything together so quickly: “It would have been great if we’d had all these things a year ago,” he adds. 

Need for balance 

While the DSA is a game changer, there are limits to the EU’s powers. As country-by-country studies by NGO EU DisinfoLab reveal, disinformation is often spread in local languages and tailored to national histories and cultural specifics. Accordingly, much of the work has to be done at the national level, says Peter Stano, Commission spokesperson for foreign affairs and security policy. 

“We can raise the alarm, but the firefighting has to be done by the member states,” he says. “They are at the front line.” 

Another challenge is to balance anti-disinformation measures against the need to safeguard freedom of expression – and authorities could be sued if they get it wrong. After the EU sanctioned Russian media outlets in 2022 following the invasion of Ukraine, a Dutch journalist organisation filed a lawsuit at the EU Court of Justice, arguing the ban violated European citizens’ freedom of information.  

“We can’t be fighting fire with fire – then we’d damage our principles,” says Stano. “We need to find other ways.” 

These other methods are part of the Commission’s disinformation policy and focus on a “pedagogic” approach, Stano says. They include continuous awareness-raising and education in media literacy through school curriculums, civil society stakeholders, and independent fact-checkers. 

“We push member states to implement these measures and, if they don’t do it, then we try to work with civil society stakeholders or independent fact-checkers instead,” Stano says. “It’s a mid to long-term effort, but hopefully it will eventually work.” 

Meyer-Resende believes that, in the longer term, technology could solve the problems it has caused. As fabrication has become automated, so too should detection and flagging of fake materials, he says, with human fact-checking as “the last line of defence.”  

But developing these new technologies takes time, and new threats are constantly emerging. Currently, platforms are still having difficulty identifying synthetic audio and video. “There is innovation on the side of disinformation generation, and there is innovation on the side of disinformation detection,” he says.  

“It’s an arms race.” 
