An introduction to AI systems with unacceptable, limited, and minimal risk
While the primary goal of the AI Act is to regulate high-risk AI systems and ensure their safe operation within the EU, it is imperative to understand the entire spectrum of AI systems. The proposed risk-based approach also encompasses systems with unacceptable, limited, and minimal risks. Which types of systems fall into these categories, and what restrictions apply to them? You will find answers to these questions in our article.
The AI CE Institute
At Deloitte, we know that AI is a key business topic of the future. That’s why we bring together diverse expertise and knowledge from industry and academia under one roof. How? With The AI CE Institute. More information can be found on our website.
The European Parliament outlined its standpoint on AI-related issues in a vote on June 12, 2023, and is currently negotiating with representatives of the EU member states in the trilogue phase. The fundamental idea of the regulation is to establish rules that legally and morally align with EU values. Accordingly, its principal aims include human oversight of AI processes, the safety of users and society, privacy assurance, transparency, the prevention of discrimination, and overall precautions against deploying AI systems that could endanger health, security, fundamental rights, democracy, or the environment.
AI systems with unacceptable risk
AI systems with unacceptable risk are considered a clear threat and are prohibited, with narrow exceptions permitted only under specific circumstances defined by law. The initial proposal by the European Commission includes systems such as:
- Artificial intelligence systems employing manipulative “subliminal techniques.”
- Artificial intelligence systems used by public authorities or on their behalf for social scoring.
- Real-time remote biometric identification systems in publicly accessible areas.
- Remote biometric identification systems used “retrospectively,” except by law enforcement agencies for investigating serious crimes and only with judicial authorization.
- Biometric categorization systems using sensitive data (e.g., gender, race, ethnic origin, citizenship, religion, and political orientation).
- Predictive policing systems based on profiling, location, or previous criminal activity.
- Emotion recognition systems in law enforcement, border control, workplaces, and educational institutions.
- Artificial intelligence systems exploiting the vulnerabilities of specific groups (e.g., people with physical or mental disabilities) to distort the behavior of individuals in such groups in a way that causes physical or psychological harm.
- Untargeted scraping of facial images from the internet or camera systems to build facial recognition databases (violating human rights and the right to privacy).
From this overview, it is evident that a significant number of systems are prohibited. This step, however, proactively ensures the safety of previously unregulated applications.
The AI Act primarily focuses on the category of high-risk AI systems. Please refer to our previous article, where we delved into this topic to discover which AI systems are considered high-risk and the conditions they must meet before entering the EU market.
AI systems with limited and minimal risk
The final segment of the legislation concerns AI systems with limited and minimal risk. Systems in the limited-risk category typically include those designed to interact with natural persons (robots or chatbots), biometric categorization systems, and “deep fakes.” The primary requirement for providers is to ensure that users are informed that they are interacting with such a system. In practice, this means users will be alerted before their first encounter with the system, and any notification will explicitly state that a given audio or video file has been altered by an AI system, along with the name of the person responsible.
Systems that do not fall into any of these categories pose only low or minimal risk, and under the current legislation, no additional legal restrictions will be imposed on their development and application within the EU. However, the Union intends to encourage developers and companies to voluntarily adopt the requirements for high-risk systems even when developing low-risk applications.
In conclusion, the AI Act’s risk-based approach aims to strike a balance between enabling innovation and protecting the rights, safety, and privacy of individuals and society as a whole. The EU aspires to ensure that AI technology is developed and utilized responsibly by categorizing AI systems based on risk levels and implementing appropriate regulations.
Does the upcoming regulation of artificial intelligence affect your company? Do you want to use AI-based technologies or are you directly involved in their development? Deloitte offers comprehensive support to ensure your compliance with the AI Act. Our services include, for example, assessing your company’s maturity to adopt AI solutions and their impact, whole model lifecycle implementation, creating and updating internal processes, supporting strategy development, employee training, and much more. For more information, visit our website.