AI Act: Europe at the forefront of AI regulation

The ECR Group’s shadow rapporteur on the Artificial Intelligence Act outlines the framework’s key aims and calls for global co-operation in creating safe and trustworthy AI

By Kosma Zlotowski


05 Apr 2024


Artificial intelligence (AI) has invaded all aspects of our lives, opening the door to unprecedented ways of using technology to increase productivity and automate many tasks. To fully exploit the innovative potential of AI, we need to minimise the risk of losing control and of systematic errors, as well as to address ethical and privacy challenges. That is why we need legislation that gives us the tools to tackle these risks, but with the understanding that the development and application of AI can bring far more benefits than problems. It was with this conviction that the European Conservatives and Reformists (ECR) Group approached work on the AI Act. We wanted it to be a roadmap for the development of safe artificial intelligence, not another bureaucratic barrier.

The AI Act is the first comprehensive document of its kind and puts the European Union at the forefront of AI regulation. In an ideal world, regulation would be one step ahead of technology, but who could have imagined just a few years ago that algorithms would replace doctors and judges, and that ChatGPT would become the most famous 'author' of our time? Given the current pace of progress, we need rules that are detailed yet universal, and that will apply to future innovations. In addition, the debate on AI cuts across ethical, legal, economic and military domains in which EU regulations already exist. We had to ensure that the new rules neither conflict with nor duplicate those already in place.

Given these difficulties, the work on the AI Act has been a long and arduous process. For me, as ECR shadow rapporteur, the most important thing was to strike the right balance between a company's size and the new obligations imposed on it. We fought for our amendments to reduce the burden on SMEs right up to the final negotiations. We were able to defend our demands on support for innovation, the functioning of regulatory sandboxes, the protection of intellectual property and exemptions for the research phase.

We wanted [the act] to be a roadmap for the development of safe artificial intelligence, not another bureaucratic barrier

In our view, the agreed text is the only compromise that could have been reached, but this does not mean that we blindly agree with everything in the document. We have concerns, including about an excessive focus on the risks posed by artificial intelligence systems. One of the priorities of the AI Act is to keep users safe, but instead of providing a sense of protection, we are building an image of AI as a high-risk technology that consumers may be afraid to use.

Regulating AI was one of the most important tasks of the European Parliament's closing term. The AI Act is a set of new obligations that will take time to implement at national level, so realistically it will be a few years before we can talk about its impact and objectively assess whether we have struck the right balance between protecting consumers and creating favourable conditions for the development of AI. But now we need to make sure that we have strong global partnerships and global rules for creating trustworthy, safe artificial intelligence. Otherwise, as with the EU's climate policy, we risk becoming a lonely island in yet another field, where everyone around us wins and Europe itself loses.