Generative AI and the AI Act
The introduction of ChatGPT at the end of 2022 marked a remarkable technological milestone: the large-scale deployment of generative artificial intelligence. Generative models represent a step toward what is known as Artificial General Intelligence (AGI) – a model not limited to a single, narrow function. Such a model mimics human intelligence, particularly in general problem-solving, with the ability to process, compose and invent concepts and to generate text, images, video and sound. How does the AI Act respond to the issues surrounding generative artificial intelligence, and what must companies developing these models comply with?
The AI CE Institute
At Deloitte, we know that AI is a key business topic of the future. That’s why we bring together diverse expertise and knowledge from industry and academia under one roof. How? With The AI CE Institute. More information can be found on our website.
We have witnessed numerous successful attempts not only to replicate human intelligence but, in some cases, to surpass it. Machines that solve Rubik’s cubes or excel at chess and the ancient Chinese game of Go – where computers have dominated even the sharpest human opponents – serve as strong evidence. However, the biggest drawback of these models is their lack of generalization: each excels at only one specific activity, or a small set of them. Thanks to the versatility of large generative AI models such as the GPTx series from OpenAI, Google’s Bard, or Stability AI’s Stable Diffusion, we are now in the midst of a revolution in how we interact with technology.
Generative models, often referred to as foundation models, are deep neural networks capable of learning almost any behavioral pattern from training data. They are versatile in handling diverse data types – text and natural language, audio, and video – whether for content generation or information extraction. These abilities come at a cost, however: they demand massive quantities of high-quality input data. As the well-known principle goes, poor input data does not yield correct results – garbage in, garbage out.
With such powerful tools, the euphoria must be accompanied by a wave of caution. Foundation models are easy to exploit: they enable the generation of fake news at scale, since it is increasingly difficult to distinguish machine-generated content from human-written text. There have already been cases of trained language models spiraling out of control and producing unethical content, of extensive GDPR violations, and of users misusing generative AI for cybercrime.
To understand how the various versions of the AI Act approach autonomous artificial intelligence and foundation models, we must consider their release dates. The Commission issued its proposal in 2021, long before the sharp rise in the popularity of generative artificial intelligence, so the only mention of it is found in Chapter IV, which concerns limited-risk AI systems. It is limited to transparency obligations for “deep fakes” – audio or video recordings generated or manipulated using AI.
The term “general purpose AI system” first appears in the Council’s text from 2022, where it is added to the definition of an AI system and subjected to the same restrictions as the high-risk systems defined in Chapter III of the regulation. These provisions cover not only pre-trained models but also generative models fine-tuned on additional data.
Finally, foundation models are a key topic in Parliament’s version, which adds an article devoted entirely to the responsible use of generative AI. Its requirements include the categorization of generative models, input data quality, technical documentation, and model stability and unbiasedness throughout the lifecycle, complemented by ESG compliance requirements. The text also establishes the so-called European Artificial Intelligence Office, responsible for overseeing and maintaining awareness of the current state of the technology so that the legislation remains future-proof. Non-compliance with the AI Act can result in fines of up to 10 million euros or 2% of a company’s previous year’s turnover.
Does the upcoming regulation on artificial intelligence apply to your company? The rise of generative artificial intelligence presents exciting opportunities and challenges for businesses. While the AI Act sets the stage for responsible and ethical AI development and use, companies must navigate the evolving regulatory landscape carefully. Deloitte offers comprehensive support to ensure your compliance with the AI Act. Join us in embracing the future of AI innovation responsibly and confidently. For more information, please visit our website.