The European Parliament’s Committee on Industry, Research and Energy (ITRE) recently voted in favour of Eva Maydell MEP’s (BG, EPP) opinion on the Artificial Intelligence (AI) Act. The opinion received a strong endorsement from the committee, with 61 votes in favour and only two against. Seven parliamentary committees are examining the proposal and Member States are still aligning their positions. As the French pass the EU Presidency baton to the Czechs, EU40, the platform of young pro-European MEPs, organised a multi-stakeholder discussion with leading actors and experts at the Microsoft Centre to take stock of where we are and to assess whether the EU is striking the right balance.
The European Commission unveiled its proposal in April 2021 with the aim of turning Europe into a global hub for trustworthy AI. The proposal is the first of its kind. Werner Stengg, principal adviser on AI in Executive Vice-President Vestager’s cabinet, said: “When we drafted our white paper it was literally a white paper. We had been working on the issues in the OECD and with others, but it hadn’t been done before and we would be the first ones in setting global standards.”
Moderator Alessandro Da Rold, managing director at EU40, asked Stengg how the Commission had reached a balance between strict protection of fundamental rights and the promotion of innovation. Stengg said that the extensive debate within the Parliament about which committee should even take the lead on the report was a manifestation of the Act’s reach. For the Commission, however, it was a question of balancing innovation and trust.
“We wanted to unfold the potential of these technologies for industry and society, but address the risks,” said Stengg. “By addressing this in a uniform way, providing trust, legal certainty, by getting the definitions right, we give developers in Europe confidence to develop solutions within the Single Market. And then it’s ‘go, go, go!’ as my boss [Vestager] would say.”
Maydell said that the eyes of the world were on Europe: “While we think that AI is too important to be left unregulated, it is even more important that Europe regulates well.” She said that the debate had focused on controversial issues like the use of biometric facial recognition, which, while important, is only 5% of the whole story. She wanted to put the spotlight on the other 95% of AI developments.
Maydell said her opinion focused on the EU’s exclusive or shared competencies and placed a particularly strong emphasis on the involvement of SMEs and start-ups; for example, she has proposed the establishment of an EU AI Regulatory Sandboxing Programme for a ‘compliance-by-design’ approach. She said that AI systems developed specifically for the purpose of research would fall outside the Act’s scope. She underlined that for wider adoption of these technologies, both businesses and citizens need to have confidence in the systems, and that responsibility is needed across the value chain.
Cornelia Kutterer, Senior Director for European Government Affairs at Microsoft, welcomed Maydell’s opinion. “When Commission President Ursula von der Leyen announced that the Commission would come forward with a proposal in 100 days, we wondered what this would actually look like.”
Microsoft started to reflect on what was necessary and developed its own AI governance model, setting standards for everyone from engineers to salespeople working with customers on how to approach sensitive issues. Microsoft identified three areas: fundamental and human rights; the risk of physical or psychological harm; and the risk of serious effects and impacts on people’s lives. These turned out to be similar to the Commission’s thinking.
Microsoft then looked at how to operationalise these principles. “Our engineers were really interested, but open questions like ‘Does your AI system have an impact on fundamental rights?’ didn’t really help. So what we did is change the principles into outcomes, which made it easier for engineers to find solutions,” said Kutterer. “Take the principle of fairness: you could stipulate that you wanted the same quality of service for different demographic groups. That’s an outcome that’s understood and that engineers can try to solve. So we’re helping design tools to advance these principles.”
Angel Martin, Senior Director for Digital Health at Johnson & Johnson and Chair of the MedTech Europe digital health committee, brought insights from his sector: “We need to strike a balance between what is a good horizontal piece of legislation and the wider ecosystem. For example, we work with the EU’s Medical Devices Regulation. We need to avoid duplication and confusion between the AI Act and this regulation, or we will deter innovation.”
Martin said that AI is still in its infancy and underlined that it is not a panacea. He gave examples of how it can be used to help in clinical trials and improve outcomes for patients in the operating room: “We can see that sometimes, even with the same surgeon, there is a variation in outcomes for patients. Using AI we can optimise outcomes and help a surgeon to make better decisions. So there are great opportunities, but there are also limitations.”
The panel fielded a number of questions from a lively audience, with issues ranging from how to improve communication and understanding around AI to how enforcement of the Act’s provisions would work in practice.
The Czech Presidency has already put forward a paper identifying the main issues to be resolved, and the Parliament hopes that its report can be adopted in the autumn. Maydell urged all actors to understand the Act in its wider geopolitical context: “I think it’s very clear that we cannot talk about tech without geopolitics. We cannot ignore the true power struggle that we have between democracies and autocracies. As we sit in the upcoming negotiations, I hope we will truly be able to keep this bigger picture in mind.”