Generative AI and the AI Act
The introduction of the ChatGPT model at the end of 2022 marked a remarkable technological milestone: the deployment of generative artificial intelligence. This represents an attempt to develop what is known a…
The AI Act, or the Artificial Intelligence Act, focuses primarily on the category of high-risk AI systems, as their inadequate use could endanger health, safety, or human rights. All three legislative bodies of the EU are aware of this and aim to describe and address the issue as effectively as possible.
The AI CE Institute

At Deloitte, we know that AI is a key business topic of the future. That’s why we bring together diverse expertise and knowledge from industry and academia under one roof. How? With The AI CE Institute. More information can be found on our website.
The Commission’s proposal divides high-risk AI systems into two categories. The first category includes AI systems used in products covered by the European Parliament’s and Council’s joint Directive on general product safety. Such products include toys, aviation, aerospace, and automotive products, medical devices, and even intelligent elevators that use AI for efficient operation. A malfunction in any of these products could have fatal consequences for their users. The Directive explicitly states that manufacturers may place only safe products on the market, which imposes certain limits on the use of AI.
The second category consists of AI systems used in potentially risky areas defined by the EU, such as education, employment, law enforcement, and access to essential services.
All high-risk AI systems will need to meet a range of demanding requirements before being introduced to the EU market. The first step will be their registration in a database managed by the European Commission. Providers will be required to establish risk assessment mechanisms, demonstrate high-quality data, prepare detailed system documentation, and thoroughly monitor outputs. In addition, human oversight of these systems and a high level of robustness, security, and accuracy will be required throughout the entire system lifecycle.
The EU Council and the European Parliament further refine certain requirements for high-risk AI systems. The EU Council, for example, focuses on making the requirements for high-risk systems more specific and technically feasible. The Parliament’s proposal clarifies the criteria for high-risk AI systems, emphasizing that falling into a particular category is not, by itself, enough for a system to be classified as high-risk. It also gives AI providers greater flexibility, allowing them to self-assess the risk level of their AI systems and communicate this information to the supervisory authority.
Is the upcoming AI regulation relevant to your company? Are you interested in utilizing AI-based technologies or directly involved in their development? Deloitte offers comprehensive support to ensure your compliance with the AI Act. Our services include assessing your company’s readiness for AI adoption and its impact, implementing the entire model lifecycle, creating and updating internal processes, assisting in strategy development, employee training, and much more. For more information, please visit our website.