Data Governance

EU AI Act: Shaping a Safe AI Future

The increasing use of AI to make decisions in high-risk areas such as healthcare, human resources, education, and e-commerce has raised ethical, societal, and economic concerns.

Author: Dr. Kilian Schmidt
Updated on: 28.2.2025
  • The EU AI Act is the world’s first comprehensive regulatory framework for artificial intelligence, addressing ethical and societal challenges.
  • A risk-based approach categorizes AI systems into four risk levels and imposes strict requirements on high-risk applications.
  • With extraterritorial reach and high penalties, it obliges all providers serving the EU market, wherever they are based, to comply.
  • Gradual implementation from 2024 aims to promote safe and transparent AI systems in Europe.

AI in a dilemma: Why clear rules are necessary for responsible use.

Recent years have shown the enormous potential of artificial intelligence, especially with the rapid proliferation of large language models (LLMs), generative AI, and automation tools. However, its growing adoption for decision-making in high-risk areas like healthcare, recruitment, education, and e-commerce has sparked ethical, societal, and economic concerns.

AI's flaws, including potential infringements of individuals’ privacy, reinforced biases, opaque decision processes, and algorithmic dehumanisation, raise serious concerns and make the case for regulation. AI has reached a tipping point at which its capabilities and usage have to be balanced against its impact on society, and existing regulations did not provide sufficient protection given its adaptive capacity, ethical implications, and rapid advancement.

The EU AI Act

The EU AI Act, also called the Artificial Intelligence Act of the European Union, has been enacted to govern the development and use of AI in the EU. It aims to strike a delicate balance between the innovation that AI brings and the fundamental rights of European Union citizens. The EU AI Act, together with the AI Innovation Package and the Coordinated Plan on AI, forms a consolidated package of policy measures designed to support the development of trustworthy AI in Europe and beyond.

It’s the first comprehensive legal framework on AI worldwide and has far-reaching implications for all key operators in the AI value chain. The scope of the EU AI Act is similar to that of the GDPR: any provider that places an AI system on the market or puts it into use within the EU is required to comply with the law, regardless of whether it is based in a non-EU state. Likewise, as long as the AI systems of non-EU companies affect EU residents, those companies fall under the Act’s extraterritorial application.

Key operators in the AI value chain:

  • Providers: Developers of AI systems or GPAI (General Purpose AI) models
  • Deployers: Users or implementers of AI systems
  • Importers: Those who bring AI systems into the EU market from outside the EU

Risk-based approach to regulation

The EU AI Act adopts a risk-based approach to regulation. It categorises AI products into four classes based on the level of risk associated with each: minimal risk, limited risk, high risk, and unacceptable risk.

Minimal risk

The Act allows AI products with minimal risk to be freely used. Developers of these products need no additional precautions. A vast majority of AI applications used within the EU fall into this category. Examples include AI-enabled video games, spam filters, etc.

Limited risk

Limited risk refers to AI systems whose developers are required to ensure transparency by disclosing that users are interacting with AI. For instance, a website that uses a chatbot for support must inform users that they are interacting with a machine, so they can make an informed decision about whether to continue or step back.

Unacceptable risk

Article 5 categorises certain AI technologies that are harmful and deemed to violate EU values and the fundamental rights of EU citizens as posing an “unacceptable risk." The use, placing on the market, and putting into service of such products are strictly prohibited under the Act. Examples of unacceptable use cases include untargeted scraping of facial images from the internet, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and social scoring systems.

High risk

Article 6 of the Act labels AI systems with the potential to negatively affect safety, fundamental rights, or other critical aspects as high risk. In addition, AI systems that are products, or safety components of products, regulated under specific EU laws referenced by the Act, such as toy safety and in vitro diagnostic medical device legislation, are also considered high risk.

Thresholds that lead an AI system to being high risk include the following:

  • AI systems that are used to evaluate applicants in employment contexts.
  • Management of critical infrastructure that could put the life and health of citizens at risk.
  • Automated management of migration, asylum, or border control.
  • Determination of access to essential public services, including systems that assess eligibility for public benefits and evaluate credit scores.
  • Administration of justice and democratic processes, including systems intended to influence the outcome of elections.
  • Biometric identification systems that are not prohibited, except for systems whose sole purpose is to verify a person’s identity.
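Purely as an illustration of the four-tier taxonomy, the categories and examples from this article can be sketched as follows (the enum and example mapping are my own simplification, not a legal classification tool):

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # freely usable, e.g. spam filters, AI-enabled video games
    LIMITED = "limited"            # transparency duties apply, e.g. support chatbots
    HIGH = "high"                  # conformity assessment required (Article 6)
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)

# Simplified example mapping drawn from this article; classifying a real
# system requires legal analysis of its concrete purpose and context.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "customer support chatbot": RiskTier.LIMITED,
    "CV screening for recruitment": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}
```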

Exemptions from the high-risk category can be triggered upon fulfilling one or more of the criteria specified by the Act, including:

  • The AI system performs only a narrow procedural task.
  • A judicial or other independent authority authorises the use of the AI application with limits set to geographic reach, databases searched, and time frame.

Providers of high-risk AI systems need to undergo a conformity assessment and comply with the following requirements:

  • Verifying compliance with technical standards related to safety, accuracy, transparency, and accountability.
  • Conducting an internal conformity assessment or undergoing a third-party assessment.
  • Ensuring that the AI system aligns with EU regulations on human rights, data privacy, and safety.

Enforcement and implementation

Non-compliance with prohibited AI practices can result in fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. Violations of the requirements for high-risk systems can be fined up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher.

Additionally, misleading authorities by supplying incorrect or incomplete information can result in fines of up to EUR 7.5 million or 1% of worldwide annual turnover, whichever is higher. For SMEs and start-ups, the Act imposes lower fines, citing innovation, which largely emanates from the start-up ecosystem, as a key consideration.

As per EU AI Act Article 99 para 6, each fine referred to in that Article shall be up to the percentages or amounts referred to in paragraphs 3, 4, and 5, whichever thereof is lower.
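The "whichever is higher" rule for standard companies, flipped to "whichever is lower" for SMEs under Article 99(6), can be sketched as simple arithmetic (function name and tier labels are illustrative, not from the Act):

```python
def ai_act_fine_cap(violation: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative maximum fine under the EU AI Act's penalty tiers.

    Each tier pairs a fixed amount with a percentage of worldwide annual
    turnover. Standard companies face the higher of the two; for SMEs and
    start-ups, Article 99(6) caps the fine at the lower of the two.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),     # Article 5 violations
        "high_risk_obligations": (15_000_000, 0.03),   # high-risk requirements
        "misleading_information": (7_500_000, 0.01),   # incorrect info to authorities
    }
    fixed, pct = tiers[violation]
    pct_amount = pct * turnover_eur
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)
```

For example, a large provider with EUR 1 billion turnover faces up to EUR 70 million (7%) for a prohibited practice, while an SME with EUR 100 million turnover supplying misleading information is capped at EUR 1 million (1%), the lower of the two figures.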

Established in February 2024 by the European Commission, the European AI Office oversees the Act’s enforcement and implementation within the member states. It aims to create a safe environment for humans with respect to the use of AI, wherein AI technologies respect their dignity, rights, and trust.

Timeline of implementation

  • Entered into force on August 1, 2024, and will be fully applicable 2 years later.
  • From the date of entry into force, the Act gives organisations six months to phase out AI practices posing an “unacceptable risk.”
  • At 12 months, the rules for GPAI take effect for new GPAI models; models already on the market at that point have 36 months from the date of entry into force to comply.
  • At 24 months, the rules for regulating “high-risk” applications will take effect.
  • At 36 months, the rules for AI systems that are products or safety components of products regulated under specific EU laws will apply.
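The milestones above follow from simple date arithmetic on the entry-into-force date; a minimal sketch (the helper and label names are my own):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on August 1, 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (day kept as the 1st here)."""
    years_over, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years_over, month=month_index + 1)

# Phased applicability dates derived from the timeline in this article.
MILESTONES = {
    "prohibitions on unacceptable-risk AI apply": add_months(ENTRY_INTO_FORCE, 6),
    "GPAI rules apply to new models": add_months(ENTRY_INTO_FORCE, 12),
    "high-risk rules apply": add_months(ENTRY_INTO_FORCE, 24),
    "rules for AI in regulated products apply": add_months(ENTRY_INTO_FORCE, 36),
}
```

Run against August 1, 2024, this places the ban on unacceptable-risk practices at February 1, 2025, and full applicability of the high-risk rules at August 1, 2026.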

Conclusion

The enactment of the EU AI Act, a much-awaited regulation, fulfils the need for targeted, AI-specific legislation. Since its enactment, it has drawn worldwide attention of mixed kinds, and it is regarded as a benchmark for the AI industry, much as the advent of the GDPR in 2018 was for data privacy.

With Kertos, you can get started on meeting the conformity requirements for high-risk AI systems. Our experts will assess your AI system to determine its risk level, guide you through implementing the necessary safeguards, and ensure compliance with the EU AI Act.


Dr. Kilian Schmidt

CEO & Co-Founder, Kertos GmbH

Dr. Kilian Schmidt developed a strong interest in legal processes early on. After studying law, he began his career as Senior Legal Counsel and Data Protection Officer at the Home24 Group. Following a position at Freshfields Bruckhaus Deringer, he moved to TIER Mobility, where, as General Counsel, he played a key role in building up the legal and public policy department, growing the company from one to 65 cities and from 50 to 800 employees. Motivated by the limited technological progress in the legal field and inspired by his advisory work at Gorillas Technologies, he co-founded Kertos to build the next generation of European data protection technology.

About Kertos

Kertos is the modern backbone of the privacy and compliance operations of scaling companies. We enable our customers to implement integral data protection and information security processes in line with GDPR, ISO 27001, TISAX®, SOC 2, and many other standards quickly and cost-effectively through automation.
