What can the EU learn from the Dutch algorithm scandal?

Amid the growing use of artificial intelligence by governments, algorithmic bias threatens to undermine fairness and reinforce discrimination in public services.
Demonstrators in Amsterdam protest the Dutch childcare benefits scandal, May 2021.

By Linda A Thompson

Linda A Thompson is a Belgian journalist who writes on EU policy and legal activism

11 Oct 2024

In 2020, it was revealed that Dutch tax officials had wrongfully targeted thousands of parents as tax cheats after racially profiling them. The government had manipulated an algorithm to flag individuals with dual nationality and a “foreign-sounding name,” using these criteria as indicators of alleged fraud in childcare-benefit claims.

The fraud hunt left thousands of families in debt and poverty. Parents lost their homes as they tried to pay back tens of thousands of euros that tax officials unjustly tried to recoup. 

It became the biggest political scandal in the Netherlands in years, prompting the government to step down a year later. Then-Prime Minister Mark Rutte called it a “dark page in the history of the Dutch government” in his cabinet’s resignation letter in January 2021. Despite the scandal, Rutte retained his post following fresh elections.

“It’s the worst fear come true in this public use of algorithms,” Henrik Trasberg, a legal advisor on new technologies to Estonia’s justice ministry, said of the Dutch revelations. The algorithms “very clearly pushed what the public agency was doing [and] who they were conducting their investigations into,” he told The Parliament.

For digital rights activists, the Dutch events serve as a cautionary tale for what can happen when public administrations implement algorithms without proper oversight and fundamental rights training. “There is a huge risk that comes with the use of AI by law enforcement or public authorities, especially in reinforcing existing discrimination,” said Chloé Berthélémy, a senior policy advisor at European Digital Rights, a Brussels-based NGO.  

Nonetheless, a July report from the Dutch privacy watchdog, the Autoriteit Persoonsgegevens (AP), found that public authorities – including municipalities, the police, the employee insurance agency and an education agency – continued to employ discriminatory algorithms throughout 2023.

“The benefits scandal has indeed shocked everyone, but painfully little has changed,” the watchdog reported.

A spokesperson for the Dutch government told The Parliament it has introduced several policies and instruments to avoid the use of discriminatory algorithms, educate public sector entities, and encourage the responsible use of algorithms and AI systems. The Dutch privacy watchdog “views these developments positively,” he said, but acknowledged the body “remains critical of the public sector’s progress.”  

With its potential to improve decision-making and boost efficiency against a backdrop of shrinking public budgets and perennial pressure to do more with less, AI holds tremendous appeal for the public sector – and is being rapidly embraced by governments at every level across the EU.

A 2022 study by the European Commission documented 686 cases of public sector use of AI in the 27 EU member states, along with Ukraine, Switzerland, Moldova and Norway. The report noted that governments’ embrace of AI would likely increase in the coming years, but also described AI expertise and competence within public agencies as “low.”

“Every government in the EU, including the Estonians – we see AI as a very, very important solution [to] our political ambitions,” Trasberg said. But he added: “Public sector use of AI has to follow fundamental rights, and we have to ensure that it's sufficiently transparent and that there is accountability.” 

AI’s vast potential is matched by the discriminatory effects it can produce. The Dutch scandal was an outlier in that civil servants themselves manipulated the algorithm, so that low-income parents, single parents and parents with a foreign nationality received a higher fraud risk score from 2016 onward, according to a 2020 parliamentary report.

The scandal followed a 2013 benefits scam that had cost the Dutch government around €4 million, putting tax officials under intense pressure from lawmakers to crack down on fraud in the years that followed. More commonly, AI systems discriminate inadvertently because they are trained on datasets that embed the racial or gender assumptions of their developers and of the clients that commissioned the software, experts say.

“Technology is not neutral. It is not conceived immaculately,” said Berthélémy, citing predictive policing as an example, whereby algorithms are trained on vast amounts of historical data to predict and help prevent potential future crimes. The practice, she said, “targets communities already marginalised and mostly racialised ones.” 
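The parliamentary report’s description of the manipulated risk score can be made concrete with a toy sketch. The Python snippet below is purely hypothetical – the tax authority’s actual code has never been published, and every field name, weight and threshold here is invented for illustration. It shows how a hand-tuned score that weights protected attributes, or proxies for them, flags people on the basis of their profile rather than any evidence of wrongdoing.

```python
# Hypothetical sketch only: not the real Dutch system, whose code was
# never published. All field names, weights and the cutoff are invented.

def fraud_risk_score(claim: dict) -> float:
    """Toy rule-based fraud risk score; a higher score means more scrutiny."""
    score = 0.0
    if claim["dual_nationality"]:   # explicitly discriminatory criterion
        score += 0.4
    if claim["income"] < 20_000:    # proxy that penalises low-income households
        score += 0.3
    if claim["single_parent"]:
        score += 0.2
    return score

AUDIT_THRESHOLD = 0.5  # arbitrary cutoff, chosen here for illustration

claim = {"dual_nationality": True, "income": 18_000, "single_parent": True}
# The profile alone, not any evidence of fraud, pushes the claim over the line.
print(fraud_risk_score(claim) > AUDIT_THRESHOLD)  # True
```

A model later trained on the audit decisions such a rule produces would learn the same bias from the historical data – the more common, inadvertent route to discrimination that experts describe.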

The human toll of AI  

Four years later, the Dutch scandal stands out for the deep human toll it wrought. But public agencies in other countries – including Denmark, France, Spain and Serbia – continue to use AI systems to target alleged benefit fraudsters with algorithms that appear to disproportionately flag certain groups, according to local NGOs and a 2023 international investigation by Lighthouse Reports, a not-for-profit collaborative newsroom.

In Denmark and France, algorithms were written to assign a higher risk factor based on nationality or low income, while in Serbia “unrepresentative data” is reinforcing existing discrimination, according to Amnesty International. 

“You can exactly see the same trend in all those countries where people from marginalised communities, people with lower incomes, single mothers, Roma communities, are much more targeted by those kinds of so-called fraud-hunting or fraud-detection algorithms,” said Berthélémy. “The difference with the Netherlands, and I will grant them this, is that the impact in the Netherlands has been much more documented,” she added.  

No public authority has conducted such investigations or issued official reports in the other EU countries that have used fraud-risk scoring technologies, nor have local public agencies acknowledged engaging in any algorithmic discrimination.

Meanwhile, in 2020, Dutch officials conceded that institutional racism had played a major role in the child-benefits scandal. A blistering 60-page report issued by the country’s privacy watchdog showed that tax officials had engaged in discriminatory and illegal behaviour by retaining information on the nationalities of 1.4 million parents. A separate 2023 report by the country’s statistics office found that 71% of the parents accused of fraud were first- or second-generation immigrants, 44% of whom came from the country’s lowest-income households.

“A big problem around algorithmic discrimination and discrimination in general is its invisibility,” said Raphaële Xenidis, an assistant professor at Sciences Po in France, whose research focuses on European discrimination and equality law.  

Limits to the AI Act  

Adopted in March, the AI Act is the EU’s attempt to put guardrails around the nascent technology and chart a trailblazing and distinctly European regulatory approach that centres on “trustworthy” and “human-centric” AI. But, two weeks later, a coalition of a dozen NGOs condemned the EU rulebook for not containing sufficient safeguards to deter public entities from engaging in AI-fuelled discrimination.  

The AI Act contains a significant carve-out for law enforcement and migration authorities, allowing them to use so-called high-risk AI systems, like facial recognition, without public disclosure. Unlike private companies, these authorities won’t have to reveal the results of mandatory risk assessments concerning the technology’s impact on EU fundamental rights, such as justice, dignity and equality.  

For its part, a spokesperson for the European Commission said the AI Act requires law enforcement agencies to conduct “comprehensive assessments” of how their activities impact fundamental rights, to ensure any potential risks are effectively managed.

Sergey Lagodinsky, a German MEP for the Greens, told The Parliament that lawmakers were at least able to draw attention to the risks of the technology in the lead-up to the passage of the AI Act. “That's about all we can do at this point,” he said, adding that many MEPs would have preferred more bans on technologies such as biometric mass surveillance, and fewer exemptions.  

“But we were realistic in our expectations. We knew that if we're sitting there across from 27 interior ministries, 27 governments, we would not be able to get a 100% parliamentary position through.” EU member countries have underlined that national security is their domain.  

The public good?  

For Lagodinsky, society is at a critical juncture when it comes to the regulation of AI and the adoption of safeguards that protect against algorithmic discrimination. “I’m not a kind of Black Mirror legislator. I don’t think the machines and robots are going to control us,” he said, referring to the dystopian, technology-focused TV show. But he added: “There is the potential for developments that could get out of hand, and we don’t want that.”  

Experts like Xenidis expect more scandals like the Dutch one in the coming years. While awareness of bias and discrimination in AI systems is maturing among companies and public entities, she believes that discrimination is ingrained in our social fabric and will inevitably be reproduced by AI tools. “I don't know to what extent it will happen again, but it will surely happen again,” she said.  

Xenidis also argued that placing the blame for automated discrimination at the doorstep of public servants who wield the software overlooks broader societal dynamics, including a lack of awareness around discrimination and a widespread belief that technology is neutral.  

“From the trainings I've done, the people I've spoken with, these are people who very often believe in the public good and that's what's making algorithmic discrimination even more vicious in a way.” 
