The best way to regulate artificial intelligence? The EU’s AI Act

Maintaining the right balance between freedom and supervision, the AI Act will enhance the spread of an important new technology while ensuring its safety, argues Eva Maydell

By Eva Maydell

Eva Maydell (BG, EPP) is the Industry, Research and Energy (ITRE) Committee rapporteur on the Artificial Intelligence Act (AI Act)

20 Apr 2022

@EvaMaydell

With the Artificial Intelligence Act (AI Act), we have – again – crossed the Rubicon. The die has been cast; there is no way back. We are setting standards for another industry that until now has been left mostly on its own, that has important social functions, and that is of central importance in the global tech rivalry. The European electorate was, and still is, quite united in demanding rules for digital players while maintaining easy digital access and competitiveness for all things digital.

With the AI Act and other legislation currently under way in fields such as cybersecurity, data, crypto and chips, the European Union is finalising what it began with the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Digital Markets Act (DMA). It will surely not be the last time digital policy is undertaken in Brussels, and some of these regulations will already need updating. But hopefully we will soon be able to say that we have dealt with the most pressing digital issues. This was the promise we gave to European citizens shocked by scandals, cyber-attacks and anti-democratic malfeasance.

As the Industry, Research and Energy (ITRE) Committee rapporteur, I welcome the European Commission’s proposal on an AI Act. Maintaining the right balance between freedom and supervision, it will bolster trust in the European AI industry. I am certain that this regulation, along with the changes that we will propose in the coming months in the ITRE Committee, will enhance the spread of an important new technology while ensuring its safety, which should always be our main goal.

Unfortunately, some are focusing on prohibiting AI through fear-mongering. When I asked [Facebook whistleblower Frances] Haugen a question during her brave testimony, she was very clear: we don’t need bans, we need transparency and clear guidelines. No responsible political group wants to let these potentially powerful systems be used without strong safeguards. But prohibiting technology seldom works as anticipated. There are better ways to deal with this, and that is, to a large extent, what the AI Act does.

As mentioned, there is much to appreciate in the proposal. First and foremost, the risk-based approach – prohibiting certain practices, setting specific requirements for high-risk AI systems, harmonising transparency rules for AI systems intended to interact with natural persons, and laying down rules on market monitoring and surveillance – would allow AI systems to be developed in line with European values.

The European Commission’s proposal, however, does not go far enough in helping companies compete in return for the many obligations expected of them. This applies especially to start-ups and SMEs – Europe’s most competitive and most sought-after companies – and therefore undermines the legitimacy and relevance of the AI Act. We need to provide companies with clearer guidelines, simpler tools and more efficient resources to cope with regulation and to innovate.

I will therefore work to strengthen the measures supporting innovation, especially those helping start-ups and SMEs. I am particularly worried that the regulatory sandboxes, as currently designed, are too cumbersome, which defeats the purpose of this highly important tool for developing AI that works “on the ground”.

In addition, I will try to provide a clearer and more concise definition of an artificial intelligence system, with an emphasis on establishing clear oversight over how this definition can be changed in the future. Next, I want to set high but realistic standards for cybersecurity and data that allow for the best mix of safety and usability. Finally, I want to future-proof the AI Act. This means better links to the other parts of digital policy, to the green transition and to the international stage, as well as anticipating possible changes in the AI industry, in AI technology and in the power of AI.

As we all know, actions have implications, and we need to be aware of them. Digital policy is as much politics as it is policy. Even if some see it that way, it is surely not just a technocratic fix.

Therefore, we need to look beyond the AI Act and consider how this policy affects our important relationship with the United States, how it will affect our neighbourhood – especially the many internal and international conflicts there – and how it could be a way to mend or sever our relations with China.

International digital rules could help overcome the current climate of mistrust with our rivals while also forging a new alliance among democracies around the world. The AI Act – together with the Data Act and other regulations and policies – could help foster a democratic market and forum that would be our strongest defence against creeping nationalism and unfairness.

Finally, we should not repeat a mistake the EU has made again and again: writing a law is important, but implementing and enforcing it will be key. This means that the AI Act needs to be more than just a well-written piece of legislation: it requires a long-term commitment from the Member States, the Commission and the international community.
