Introduction to high-risk AI systems
The AI Act, or the Artificial Intelligence Act, focuses primarily on the category of high-risk AI systems, whose inadequate use could endanger health, safety, or fundamental rights. All three EU institutions involved in the legislative process (the Commission, the Council, and the Parliament) are aware of this and aim to describe and address the issue as effectively as possible.
The AI CE Institute
At Deloitte, we know that AI is a key business topic of the future. That’s why we bring together diverse expertise and knowledge from industry and academia under one roof. How? With The AI CE Institute. More information can be found on our website.
The Commission’s proposal divides high-risk AI systems into two categories. The first category includes AI systems used in products covered by EU product safety legislation, such as the General Product Safety Directive adopted jointly by the European Parliament and the Council. Such products include toys, aerospace and automotive products, medical devices, and even intelligent elevators that use AI for efficient operation. It is evident that a malfunction in these products could have fatal consequences for their users. The legislation explicitly obliges manufacturers to place only safe products on the market, which in itself constrains the use of AI.
The second category consists of AI systems used in potentially risky areas defined by the EU:
- Critical infrastructure management and operation, where malfunction could jeopardize human lives.
- Education, where improperly configured systems could deny people access to education or discriminate. This category may include systems that evaluate exam results or allocate individuals to specific groups.
- Employment, workforce management, and access to self-employment activities. For instance, automated qualification assessment systems fall into this category.
- Access to essential private and public services and benefits, together with their utilization. A typical example might be an automatic evaluation of loan repayment capability.
- Law enforcement. In these cases, conflicts with the fundamental rights of affected individuals could arise.
- Migration, asylum, and border control.
- Biometric identification and categorization of natural persons.
All high-risk AI systems will need to meet various demanding requirements before being introduced to the EU market. The first step will be their registration in a database managed by the European Commission. Providers will be required to establish risk assessment mechanisms, demonstrate high-quality data, prepare detailed system documentation, and thoroughly monitor outputs. In addition, human oversight of these systems and a high level of robustness, security, and accuracy throughout the entire system lifecycle will be required.
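Purely as an illustration (the AI Act imposes legal obligations, not a code-level API), the requirements listed above could be tracked internally with a simple checklist structure. All names below are hypothetical and mirror only the obligations mentioned in this article:

```python
from dataclasses import dataclass, fields


@dataclass
class HighRiskComplianceChecklist:
    """Hypothetical internal checklist for the obligations named in the text.

    This is a sketch for illustration, not an official AI Act artifact.
    """
    registered_in_eu_database: bool = False        # registration with the European Commission
    risk_assessment_in_place: bool = False         # risk assessment mechanisms
    data_quality_demonstrated: bool = False        # high-quality data
    documentation_prepared: bool = False           # detailed system documentation
    outputs_monitored: bool = False                # thorough monitoring of outputs
    human_oversight_enabled: bool = False          # human oversight of the system
    robustness_security_accuracy_assured: bool = False  # lifecycle-wide robustness/security/accuracy

    def missing_items(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready_for_eu_market(self) -> bool:
        """All listed obligations must be met before market introduction."""
        return not self.missing_items()
```

A provider could instantiate the checklist, tick items off as evidence is gathered, and gate release on `ready_for_eu_market()`; real compliance tracking would of course be far richer than a list of booleans.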
The EU Council and the European Parliament further specify certain requirements for high-risk AI systems. The Council focuses on making the requirements more specific and technically feasible. The Parliament’s proposal clarifies the classification criteria, emphasizing that merely falling into a listed category is not enough for a system to be classified as high-risk. It also gives AI providers greater flexibility, allowing them to self-assess the risk level of their systems and report the result to the supervisory authority.
Is the upcoming AI regulation relevant to your company? Are you interested in utilizing AI-based technologies or directly involved in their development? Deloitte offers comprehensive support to ensure your compliance with the AI Act. Our services include assessing your company’s readiness for AI adoption and its impact, implementing the entire model lifecycle, creating and updating internal processes, assisting in strategy development, employee training, and much more. For more information, please visit our website.