AI Act and other relevant European legislation
Regulatory norms such as the Artificial Intelligence Act (AI Act) are often presented and explained in isolation, so the motivations behind their content, and the broader vision of the institutions issuing them, can be lost for lack of context. This article provides an overview of the related legislation and strategies within which the European Union's legislators have crafted the AI Act.
The AI Act proposal is built on the principle of a unified internal market for artificial intelligence. Its goal is to harmonize the currently sparse and fragmented patchwork of national rules, which could otherwise hinder the free movement of digital products in the near future. EU law in the field of AI will take the form of a regulation, which is binding and directly applicable in all member states.
Given the AI Act’s high complexity and broad applicability, EU authorities must reconcile it with a considerable body of existing legislation, including the EU Charter of Fundamental Rights and secondary law on data protection, consumer protection, non-discrimination, and gender equality. The proposal is aligned with the General Data Protection Regulation (GDPR) and the Law Enforcement Directive on data protection. At the same time, full compliance must be ensured with existing sector-specific legal frameworks in which high-risk AI systems are already in use or are expected to be deployed in the foreseeable future.
Prohibited or Restricted Practices in AI and the Need for Legal Framework
The requirements for digital products outlined below will need to be addressed by various general or sector-specific regulations. For AI development, three main pillars have been defined, as discussed in previous articles, and regulatory intervention will be calibrated according to agreed criteria:
- The first pillar prohibits the use of models with unacceptable risks that subliminally influence human behavior, conduct social scoring, or employ remote biometric identification.
- The second pillar regulates high-risk systems that may endanger users in areas such as healthcare, education, or transportation.
- The third pillar establishes basic rules for models with limited or minimal risk, including deepfake generators and chatbots.
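Purely for illustration, the three pillars above can be sketched as a toy triage function. All names, keyword sets, and thresholds here are hypothetical simplifications invented for this sketch; the actual classification under the AI Act follows detailed legal criteria and annexes, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited or minimal risk"

# Illustrative sets drawn from the three pillars described above;
# these are simplifications, not the Act's legal definitions.
PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "social scoring",
    "remote biometric identification",
}
HIGH_RISK_DOMAINS = {"healthcare", "education", "transportation"}

def classify(practice: str, domain: str) -> RiskTier:
    """Toy triage of an AI use case into one of the three pillars."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE  # first pillar: banned outright
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH          # second pillar: strict requirements
    return RiskTier.LIMITED           # third pillar: basic rules only
```

The point of the sketch is the ordering: prohibited practices are checked first and are banned regardless of domain, while the high-risk tier is triggered by the area of use.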
Where AI systems serve as safety components of products, the EU requires them to comply with the existing sector-specific safety legislation, in particular the acts covered by the New Legislative Framework (NLF) for industrial products. Examples of such products include heavy machinery, toys, and medical devices.
Alignment with other EU policies
The AI Act is intended as a building block of Europe’s digital decade. The proposal is part of the European approach to artificial intelligence set out in the White Paper on AI, so the text aligns with the current European AI strategy. It also fits the European Commission’s overall digital strategy and its goal of supporting technology that works for people. Furthermore, the proposal significantly strengthens the Union’s role in shaping global standards and regulations for trustworthy AI in line with the values and interests of the European Union.
From a legal perspective, the AI Liability Directive is a crucial companion text that sets rules for non-contractual civil liability for damage caused by artificial intelligence. The directive addresses the burden of proof, ensuring that legitimate compensation claims are not obstructed and that injured parties are protected to the same extent as with traditional technologies. This is especially important when AI systems are sold as products, since the model’s creator is then liable to third parties.
With cyberattacks on AI systems, and on businesses generally, continuing to rise, the European Union is responding by raising cybersecurity requirements for companies. The new cybersecurity law implementing the NIS2 Directive is expected to take effect in the second half of 2024. It establishes governance structures for the secure operation of systems and networks across selected sectors.
The Cyber Resilience Act (CRA), whose final version is expected next year and which is anticipated to take effect in 2026, introduces a cybersecurity certification scheme for products and services. The regulation is also referred to as the world’s first Internet of Things (IoT) legislation, as it sets rules for Internet-connected devices such as smart speakers, watches, and computer games. Manufacturers of such software and hardware products will need to meet the cybersecurity requirements set out in the CRA, and AI systems meeting the same criteria will have to fulfill the conditions of this additional European legislation.
Does the upcoming regulation of artificial intelligence affect your company? Do you want to use AI-based technologies, or are you directly involved in their development? Deloitte offers comprehensive support to ensure your compliance with the AI Act. Our services include, for example, assessing your company’s maturity to adopt AI solutions and their impact, implementation across the whole model lifecycle, creating and updating internal processes, supporting strategy development, employee training, and much more. For more information, visit our website.