Will the Artificial Intelligence Act protect the rights of Europeans?

The European Parliament and Council are entering a period of intense debate on the AI Act with the aim of agreeing on a position by the end of the year. Dods EU Political Intelligence examines how lawmakers plan to address the risks AI could pose to the rights of Europeans.
Brando Benifei at a 2020 European Parliament plenary session on artificial intelligence | Photo: EC - Audiovisual Service

By Lucie Schniderova

Lucie Schniderova was previously a consultant on digital policy at Dods EU Political Intelligence

21 Sep 2022


Artificial intelligence promises to help drive development and improve efficiency across myriad fields and social activities. However, with great power comes great responsibility: in this case, the responsibility to mitigate the risks AI poses to individuals and society.

The European Commission’s AI Act proposal, the first comprehensive legal framework for the technology globally, classifies AI applications by risk and regulates them accordingly, with a focus on the impact on people's safety and fundamental rights.

The new rules cover both high-risk AI systems in products that are already regulated, such as cars and medical devices, and those used in specific areas, such as biometric identification and education. Some AI applications deemed to pose “unacceptable” risks will be prohibited, such as the social credit system used in China, but low-risk applications will remain unregulated.

Difficult negotiations are expected in the European Parliament and the Council, as lawmakers will need to strike a balance between supporting innovation and protecting the rights of Europeans.

“No” to invasive surveillance

One of the key issues is how to regulate the use of AI for monitoring individuals. In the European Parliament, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) and the Committee on the Internal Market and Consumer Protection (IMCO) have been leading the work on amending the Commission’s proposal to ensure Europeans are protected against threats to their privacy and freedom of expression. In a draft report published at the end of April, co-rapporteurs Brando Benifei (IT, S&D) and Dragoş Tudorache (RO, RE) suggested tougher regulation of risky AI.


While most MEPs agree on the need for stronger safeguards against surveillance and better protection of privacy, some political groups have argued that high-risk AI applications in this area should be banned outright.

MEPs have said they want to extend the prohibition of remote biometric identification systems in public spaces. The socialists, liberals, Greens and some conservatives have argued that systems which track and categorise people are too intrusive and carry an unacceptable risk of abuse. Benifei has argued for a blanket ban, with no exceptions.

The S&Ds and the Greens also want to ban emotion recognition technology – which claims to be able to detect how people feel – because of its potential to harm freedom of expression. While the socialists want to ensure AI cannot be used to monitor people in the workplace without workers’ consent, the Greens have argued that constant monitoring of workers should be prohibited.

However, Tudorache and some conservative MEPs have warned that the list of prohibited practices in the legislation should not be too long.

Civil society reaction

Parliament’s draft report and amendments reflect calls from a range of civil society organisations to better protect fundamental rights. The co-rapporteurs introduced a fundamental rights impact assessment, a crucial element that most political groups felt was missing from the Commission’s proposal. Benifei has argued that users of high-risk AI systems must assess the risks those systems pose to fundamental rights and plan how to mitigate them.

Academics, including a group of researchers from the University of Cambridge, have also stressed that the regulation should require stronger evaluation of broader societal harms.

Civil society groups have welcomed the fact that the draft report extends the list of high-risk applications to cover AI systems designed to interact with children or able to influence democratic processes. They have also welcomed a ban on systems that predict future criminal activity, on the grounds that they undermine the presumption of innocence and could be abused. The Greens have echoed most of these concerns in amendments, including allowing the Commission to periodically update the risk categories.

“It's positive that amendments have been tabled to prohibit harmful use cases, require those using risky AI to perform accountability measures, such as fundamental rights impact assessments, and enable people affected to complain if their rights have been violated,” Sarah Chander, Senior Policy Adviser at European Digital Rights (EDRi), told Parliament Magazine in a statement.

However, some Member States, including the Netherlands, Finland and Denmark, have said the high-risk categories are too broad. The Czech Presidency of the Council has therefore suggested removing some AI applications from the high-risk category, including biometric categorisation.

Protection versus innovation

While some lawmakers and groups see the AI Act as an opportunity to signal which aspects of AI are undesirable, tech industry representatives and pro-business politicians have warned that heavy-handed regulation could stifle innovation. The Computer & Communications Industry Association (CCIA) has criticised calls from some MEPs for mandatory third-party audits before risky AI can be put on the EU market.

“Such a bureaucratic barrier would significantly delay and block many AI applications from ever benefiting Europeans,” Christian Borggreen, Head of CCIA Europe, told Parliament Magazine in a statement, adding that the EU should also limit prohibited AI practices to extreme cases clearly contrary to EU principles and fundamental rights.


Some Member States have also voiced concerns that the regulation could capture AI systems that do not pose a serious risk to fundamental rights. The Czech Presidency has suggested that AI should be classified as high risk only if its output takes effect immediately, without human review, or plays an important role in human decision-making. Conservative MEPs have also stressed that the legislation needs to create the right environment for the development and uptake of AI in Europe.

According to Chander, the priority of the Act is to make sure innovation in AI actually benefits people and society, rather than technology companies alone. “Our human rights cannot be subordinated to fear-mongering around innovation,” she said.

While there are issues to contend with, the co-legislators gave themselves an ambitious timeline. If the Parliament's leading committees adopt a position by the end of October as scheduled, a plenary vote will follow in November. In the Council, the Czech Presidency is aiming to reach a general approach by the end of 2022 to pave the way for interinstitutional negotiations.
