- The EU AI Act is the world’s first comprehensive legal framework for regulating artificial intelligence, aiming to balance innovation with the fundamental rights of EU citizens.
- It follows a risk-based approach, categorizing AI systems into four levels:
  - Minimal risk
  - Limited risk
  - High risk (strict regulations apply)
  - Unacceptable risk (strictly prohibited)
- Non-compliance can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
- The law entered into force in August 2024, with phased compliance deadlines extending over the following 36 months.
- Kertos supports businesses in meeting compliance requirements for high-risk AI systems and offers tailored solutions to ensure full adherence to the EU AI Act.
The Meteoric Rise of AI Brings Risks
In recent years, the immense potential of artificial intelligence has fascinated us, particularly with the rapid expansion of large language models (LLMs), generative AI, and automation tools. However, the increasing use of AI for decision-making in high-risk areas such as healthcare, recruitment, education, and e-commerce has raised ethical, social, and economic concerns.
The shortcomings of AI, ranging from potential privacy violations and increased bias to opaque decision-making and algorithmic dehumanization, have made regulation necessary. AI development has reached a turning point at which its capabilities and uses must be weighed against their societal impact, and existing regulations have not kept pace with the adaptability, ethical implications, and rapid evolution of AI technologies.
The EU AI Act
The EU AI Act, also known as the European Union Artificial Intelligence Act, was introduced to regulate the development and use of AI within the EU. It aims to strike a balance between AI-driven innovation and the fundamental rights of EU citizens.
The EU AI Act, together with the AI Innovation Package and the Coordinated Plan on AI, forms a comprehensive policy framework designed to support the development of trustworthy AI in Europe and beyond.
As the world’s first comprehensive legal framework for AI, the law has significant consequences for every major stakeholder in the AI value chain. Its territorial scope resembles that of the General Data Protection Regulation (GDPR): any provider placing an AI system on the EU market must comply, regardless of where it is based. As long as an AI system affects people in the EU, it falls under the law’s extraterritorial reach.
Key Stakeholders in the AI Value Chain
- Providers: Developers of AI systems or general-purpose AI models
- Deployers (users): Individuals or businesses implementing AI systems
- Importers: Those bringing AI systems from outside the EU to the EU market
Risk-Based Regulatory Approach
The EU AI Act follows a risk-based approach, categorizing AI systems into four risk levels according to their potential impact.
Minimal Risk
AI systems with minimal risk can be freely used without additional regulatory requirements. The majority of AI applications in the EU fall into this category.
Examples: AI-powered video games, spam filters
Limited Risk
AI systems classified as limited risk must meet transparency obligations. Developers are required to inform users when they are interacting with AI.
Example: Websites using chatbots must clearly indicate that users are communicating with an AI system, allowing them to make an informed decision about continuing or opting out.
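How this disclosure is implemented is left to each provider. Below is a minimal Python sketch of a chat session that surfaces an AI notice before any bot reply; the wording, names, and message format are our own illustrative assumptions, not something the Act prescribes:

```python
# Hypothetical sketch: surfacing an AI disclosure at the start of a chat.
# The Act requires that users are informed; it does not mandate any
# particular wording or implementation.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "You may end this conversation at any time."
)

def start_chat_session() -> list[dict]:
    """Open a chat session whose very first message is the AI notice."""
    return [{"role": "notice", "text": AI_DISCLOSURE}]

session = start_chat_session()
print(session[0]["text"])  # displayed before any AI-generated reply
```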
Unacceptable Risk
Article 5 of the EU AI Act bans certain AI technologies that pose a threat to EU values and fundamental rights. The use, distribution, and deployment of such AI systems are strictly prohibited.
Examples of banned AI systems:
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Real-time remote biometric identification in publicly accessible spaces, subject to narrow law-enforcement exceptions
- Social scoring systems that classify people based on behavior or personal characteristics, leading to unjustified or disproportionate treatment
High Risk
Article 6 defines AI systems as high risk if they have the potential to compromise safety, fundamental rights, or critical infrastructures. These systems must comply with strict regulations.
Examples of high-risk AI applications:
- AI used in hiring processes to evaluate job applicants
- Management of critical infrastructures where failure could endanger lives
- Automated migration, asylum, and border control decisions
- AI determining access to essential public services, including credit scoring and social benefits eligibility
- AI used in the administration of justice and democratic processes, such as systems intended to influence election outcomes
- Biometric identification systems that are not explicitly banned, except those used solely for identity verification
Exceptions to the high-risk category may apply if specific legal criteria are met, for instance where the system:
- Performs only a narrow procedural task
- Improves the result of a previously completed human activity
- Carries out a preparatory task for an assessment, without replacing or influencing human judgment
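To make the four risk tiers above concrete, here is a minimal Python sketch of a triage table mapping example use cases to tiers. The mapping and names are purely illustrative; real classification requires a legal assessment against Articles 5 and 6 and the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # no extra obligations
    LIMITED = "limited"            # transparency obligations
    HIGH = "high"                  # strict compliance requirements
    UNACCEPTABLE = "unacceptable"  # prohibited under Article 5

# Illustrative examples only, drawn from the categories described above.
EXAMPLE_USE_CASES = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV screening for hiring": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; anything unknown needs case-by-case legal review."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        raise ValueError(f"'{use_case}' requires an individual legal assessment")
    return tier

print(triage("CV screening for hiring"))  # RiskTier.HIGH
```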
Compliance Requirements for High-Risk AI Systems
Providers of high-risk AI systems must undergo a compliance assessment and adhere to strict regulatory requirements, including:
- Ensuring compliance with EU safety, accuracy, transparency, and accountability standards
- Conducting internal or third-party conformity assessments
- Aligning AI systems with EU laws on human rights, data protection, and security
Enforcement and Penalties
Failure to comply with the EU AI Act can result in significant financial penalties. For most companies, the cap in each tier is whichever of the two amounts is higher:
- €35 million or 7% of global annual turnover for non-compliance with banned AI practices
- €15 million or 3% of global annual turnover for violations in the high-risk category
- €7.5 million or 1% of global annual turnover for providing false or incomplete compliance information
To foster AI innovation while maintaining regulatory oversight, the law is more lenient toward startups and SMEs: under Article 99(6), their fines are capped at whichever of the two amounts, the fixed sum or the percentage of turnover, is lower, as the sketch below illustrates.
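The Python sketch encodes the three tiers quoted above together with the higher/lower rule. It is our own simplification; the actual fine in any individual case is set by the competent supervisory authority:

```python
# Fine ceilings under Article 99, as quoted above: (fixed amount in EUR,
# share of global annual turnover). For most undertakings the cap is the
# HIGHER of the two; for SMEs and start-ups, Article 99(6) takes the LOWER.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def fine_ceiling(tier: str, annual_turnover_eur: float, is_sme: bool) -> float:
    fixed, share = FINE_TIERS[tier]
    turnover_based = share * annual_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A large company with EUR 1bn turnover using a banned practice:
print(fine_ceiling("prohibited_practices", 1_000_000_000, is_sme=False))
# -> 70000000.0 (7% of turnover exceeds the EUR 35m fixed amount)
```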
The European AI Office, established within the European Commission in February 2024, oversees the implementation and enforcement of the EU AI Act across all EU member states. Its mission is to create a safe AI ecosystem in which AI technologies respect human dignity, rights, and public trust.
Implementation Timeline
The EU AI Act was first proposed in April 2021 and underwent extensive negotiations before its adoption in 2024. The European Parliament approved it on March 13, 2024, with 523 votes in favor, 46 against, and 49 abstentions; the Council gave its final approval on May 21, 2024.
Key implementation milestones:
- August 1, 2024 – The AI Act comes into force
- 6 months – Bans on AI practices classified as "unacceptable risk" take effect
- 12 months – General-Purpose AI (GPAI) regulations apply to new GPAI models; existing models have 36 months to comply
- 24 months – Compliance regulations for high-risk AI applications take effect
- 36 months – AI systems classified as safety components under EU law must fully comply
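Because every deadline is counted from the entry-into-force date, the corresponding calendar dates can be derived mechanically, as in this short Python sketch (the milestone labels are our own summaries of the phases listed above):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act entered into force on this date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

MILESTONES = {
    6: "bans on 'unacceptable risk' practices take effect",
    12: "obligations for new general-purpose AI (GPAI) models apply",
    24: "compliance rules for high-risk AI applications take effect",
    36: "AI safety components under EU product law must fully comply",
}

for months, label in MILESTONES.items():
    # Note: the Act's own application dates fall on the 2nd of the month
    # (e.g. February 2, 2025 for the bans); this is a whole-month approximation.
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {label}")
```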
Conclusion
The EU AI Act is a long-awaited regulatory framework addressing the opportunities and risks of artificial intelligence. Just as the GDPR transformed data privacy in 2018, the AI Act sets a new global standard for AI governance.
With Kertos, you can ensure compliance for high-risk AI systems.
Our experts assess your AI system’s risk level, guide you through necessary protective measures, and help you meet the requirements of the EU AI Act.