Current state of EU legislation on AI
Legislative processes within the European Union can seem complex, both in their form and in their timeframe. This is also true for the Artificial Intelligence Act (AI Act). To explain clearly how this legislation is moving towards final approval, one needs to start with the first initiatives, follow the legislative text from the first proposals onwards, and place everything in a broader context.
The AI CE Institute
At Deloitte, we know that AI is a key business topic of the future. That’s why we bring together diverse expertise and knowledge from industry and academia under one roof. How? With The AI CE Institute. More information can be found on our website.
The first European AI initiatives date to 2018, with the creation of the High-Level Expert Group on Artificial Intelligence (AI HLEG) and the European AI Alliance. These two bodies were set up to shape European values for trustworthy AI and to engage academics, business representatives, citizens, and non-governmental organizations in a conversation about rules for safe AI use across Europe. A year later, the Ethics Guidelines for Trustworthy AI were issued, reflecting the perspectives of all involved parties. Building on these initiatives, the White Paper on Artificial Intelligence: a European approach to excellence and trust and the Assessment List for Trustworthy Artificial Intelligence (ALTAI) were published, which, together with an online public consultation in 2020, provided the foundation for the legislative text.
The final version will be the subject of difficult negotiations
Subsequently, attention turned to the AI Act itself, which currently exists in three versions. The Commission presented its version in April 2021, the Council of the EU expressed its position last November, and most recently the European Parliament proposed its amendments on May 11, 2023. The versions share a common foundation in a risk-based approach (dividing AI systems into unacceptable, high, limited, and minimal risk categories), yet they differ on many points. For example, the latest version from the European Parliament specifically addresses generative AI systems such as ChatGPT and Stable Diffusion. The European definition of AI itself also varies across the texts and will require consensus among all participating parties. The legislation is currently in the trilogue phase, which aims to settle the final wording so that the text is up to date and addresses the risks associated with the use of artificial intelligence from all perspectives: those of developers, scientists, and AI experts, as well as the protection of companies and end customers.
The trilogue is expected to be lengthy and demanding, and the final version will not be available immediately after it begins. For practical reasons, however, the final act is still anticipated to be adopted before the European Parliament elections scheduled for May 2024. There is therefore a real possibility that the final legislation will be available as early as next year, with the regulation becoming applicable 24 months after it enters into force.
Does the upcoming regulation of artificial intelligence affect your company? Do you want to use AI-based technologies, or are you directly involved in their development? Deloitte offers comprehensive support to ensure your compliance with the AI Act. Our services include, for example, assessing your company’s maturity to adopt AI solutions and their impact, full model lifecycle implementation, creating and updating internal processes, supporting strategy development, employee training, and much more. For more information, visit our website.