Part 10 of 10 -- Keeping Humans Out of Jail -- Series Finale

The true cost of AI non-compliance: EU AI Act penalties explained

EU AI Act penalty tiers explained. Article 99 fines up to EUR 35 million or 7% of global turnover, enforcement mechanisms, civil liability, reputational damage, and how to avoid becoming a cautionary tale.

Summary

The EU AI Act introduces the most significant penalties for AI-related violations in history. Three tiers of administrative fines -- up to EUR 35 million or 7% of global turnover at the top end -- create a penalty framework that exceeds even GDPR. But fines are only part of the picture. Civil liability, reputational damage, loss of market access, and insurance implications multiply the true cost of non-compliance. This guide breaks down Article 99 penalty tiers, explains how fines are calculated, maps the enforcement authorities, compares the AI Act with GDPR enforcement, covers the emerging civil liability landscape, and -- most importantly -- explains exactly how to avoid all of this. Because our AI Agents Guy has been through the arrest, the interrogation, the court appearance, and now the sentencing. Do not be this guy.

Agentic Fluxus
EU AI Act compliance specialist · Published 27 Mar 2026 · 14 min
Key takeaways

1. Fines reach EUR 35 million or 7% of global turnover. The maximum penalty for deploying prohibited AI systems exceeds GDPR by a significant margin. And cumulative exposure under both regulations can reach 11%.

2. Enforcement starts in 2026 with real teeth. National market surveillance authorities are staffing up. The EU AI Office coordinates cross-border enforcement. First actions are expected by late 2026.

3. Fines are just the beginning. Civil liability, reputational damage, loss of market access, insurance exclusions, and competitive disadvantage multiply the true cost far beyond the administrative fine.

4. Compliance is dramatically cheaper than non-compliance. The cost of building a compliance programme is a fraction of even a modest fine. And the investment pays dividends in risk reduction, operational quality, and market trust.


The three penalty tiers under Article 99

Article 99 of the EU AI Act establishes a three-tiered penalty framework, modelled on GDPR's approach but with higher ceilings. The tiers correspond to the severity of the violation, with the harshest penalties reserved for the most dangerous AI practices.

Tier 1 -- Prohibited practices
EUR 35 million or 7% of global turnover

Violations of Article 5: social scoring, manipulative AI, exploitation of vulnerable persons, real-time biometric identification without authorisation, emotion recognition in workplaces and schools

Tier 2 -- High-risk AI obligations
EUR 15 million or 3% of global turnover

Violations of high-risk AI requirements (Articles 6-49): inadequate risk management, missing technical documentation, insufficient human oversight, non-compliant conformity assessments, deployer obligation failures under Article 26

Tier 3 -- Information and cooperation
EUR 7.5 million or 1% of global turnover

Supplying incorrect, incomplete, or misleading information to authorities. Failure to cooperate with market surveillance authorities. Non-compliance with transparency obligations for certain AI systems

Key detail

For all three tiers, the fine is the higher of the fixed amount or the percentage of turnover. A company with EUR 1 billion in global turnover violating Article 5 faces up to EUR 70 million (7% of EUR 1 billion), not EUR 35 million. The fixed amounts are the floor for large enterprises, not the ceiling.
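The "higher of" rule can be sanity-checked in a few lines of Python. This is an illustrative simplification using the tier values listed above, not legal advice; the tier labels and function names are ours:

```python
# Maximum fine ceilings per Article 99 tier: (fixed amount in EUR,
# percentage of global annual turnover). Illustrative only.
TIERS = {
    "prohibited_practices": (35_000_000, 7),   # Tier 1, Article 5
    "high_risk_obligations": (15_000_000, 3),  # Tier 2, Articles 6-49
    "information_failures": (7_500_000, 1),    # Tier 3
}

def fine_ceiling(tier: str, global_turnover_eur: int) -> int:
    """Return the maximum fine: the HIGHER of the fixed amount or
    the percentage of global annual turnover (integer EUR)."""
    fixed, pct = TIERS[tier]
    return max(fixed, global_turnover_eur * pct // 100)

# The example from the text: EUR 1 billion turnover, Article 5 violation.
# 7% of EUR 1 billion = EUR 70 million, which exceeds the EUR 35M floor.
print(fine_ceiling("prohibited_practices", 1_000_000_000))  # 70000000
```

For a company with EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million fixed amount becomes the ceiling instead, which is why the fixed amounts act as a floor for large enterprises.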

At a glance: up to EUR 35M or 7% of global turnover (max) · 3 penalty tiers · full enforcement from August 2026

How fines are calculated

The EU AI Act gives national authorities significant discretion in determining fine amounts within the maximum ceilings. Article 99 specifies the factors that must be considered.

Nature, gravity, and duration. A one-time incident affecting a small number of people will attract a lower fine than a systematic violation affecting thousands over months. The seriousness of the harm -- whether it affects health, safety, or fundamental rights -- weighs heavily.

Intentional or negligent character. Deliberate non-compliance is treated more harshly than negligent failure. An organisation that knowingly deploys a prohibited AI system faces higher penalties than one that failed to classify a system correctly.

Actions to mitigate damage. Prompt, effective action to contain and remediate an incident is a genuine mitigating factor. This is why having an incident response plan matters -- it demonstrates preparedness and swift action.

Degree of responsibility. Authorities will assess the technical and organisational measures you had in place. A documented compliance programme -- even if imperfect -- demonstrates good faith. Having nothing demonstrates negligence.

Previous infringements. Repeat offenders face escalating penalties. First-time violations with demonstrated corrective action may receive lighter treatment.

Cooperation with authorities. Full, transparent cooperation with investigations is a significant mitigating factor. Obstruction, delay, or misleading information is an aggravating factor and may trigger additional Tier 3 penalties.

Size and market share. The proportionality principle applies. Fines should be effective and dissuasive but not disproportionate to the undertaking's capacity. For SMEs and start-ups, Article 99(6) caps each fine at the lower of the fixed amount or the percentage of turnover.
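The SME protection inverts the general rule: on one reading of Article 99(6), the applicable ceiling for SMEs and start-ups is the lower of the two limits rather than the higher. A minimal sketch of that reading (function name and example figures are ours; verify against the official text for any real case):

```python
def sme_fine_ceiling(fixed_eur: int, pct: int, turnover_eur: int) -> int:
    """SME ceiling under one reading of Article 99(6): whichever of
    the fixed amount or the turnover percentage is LOWER."""
    return min(fixed_eur, turnover_eur * pct // 100)

# A hypothetical start-up with EUR 20M turnover facing a Tier 1 matter:
# 7% of EUR 20M = EUR 1.4M, well below the EUR 35M fixed amount.
print(sme_fine_ceiling(35_000_000, 7, 20_000_000))  # 1400000
```

This is what "take into account the interests of small-scale providers and start-ups and their economic viability" (Recital 164) looks like in practice: the same violation carries a far lower ceiling for a small undertaking.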

Administrative fines should be effective, proportionate and dissuasive. They should take into account the interests of small-scale providers and start-ups and their economic viability.

EU AI Act, Recital 164, Regulation (EU) 2024/1689

Who enforces the EU AI Act

Understanding the enforcement landscape is essential for knowing where your compliance obligations are supervised and who has the power to investigate and fine you.

National market surveillance authorities

Each EU Member State must designate one or more national authorities responsible for enforcing the AI Act. These authorities have the power to conduct inspections, request documentation, order corrective actions, restrict or withdraw AI systems from the market, and impose administrative fines. Key designations as of early 2026 include the Netherlands (Autoriteit Persoonsgegevens), France (CNIL for data-related AI, DGCCRF for market aspects), Germany (BNetzA for general AI, BaFin for financial AI), and Ireland (yet to be formally confirmed, likely the Data Protection Commission for most AI systems).

The European AI Office

Established within the European Commission, the AI Office coordinates enforcement across Member States and has direct oversight of general-purpose AI models. It can investigate GPAI providers, request information, and impose fines for GPAI-specific violations. The AI Office also issues guidance, maintains a database of high-risk AI systems, and supports national authorities with technical expertise.

Cross-border enforcement

When an AI system operates across multiple EU Member States, enforcement is coordinated between the relevant national authorities, with the AI Office facilitating cooperation. The authority in the Member State where the provider or deployer is established typically takes the lead, but any affected Member State can initiate enforcement proceedings. This means a company based in the Netherlands using an AI system that affects people in Germany, France, and Spain could face enforcement action from any or all of those jurisdictions.

How exposed is your organisation?
Our free readiness assessment maps your penalty exposure and compliance gaps in 10 minutes.

Civil liability: beyond administrative fines

Administrative fines under Article 99 are only one dimension of the cost of non-compliance. Civil liability -- the obligation to compensate individuals who suffer harm from non-compliant AI -- can dwarf the administrative fines, especially for organisations with widespread AI deployment.

The AI Liability Directive

The proposed AI Liability Directive (tabled in 2022; note that the Commission signalled its withdrawal in its 2025 work programme, so its future is uncertain) would create a harmonised framework for civil claims related to AI systems across the EU. Key features include a rebuttable presumption of causation -- if a claimant can show that an AI provider or deployer failed to comply with the AI Act, and the type of harm suffered is the type the violated rule was designed to prevent, a causal link is presumed. The burden of proof shifts to the defendant to prove the AI system did not cause the harm. This is a significant departure from traditional liability, where the claimant must prove causation.

Existing liability under national law

Even before the AI Liability Directive is adopted, individuals can bring civil claims under existing national tort law, product liability rules (the revised Product Liability Directive now explicitly covers software and AI), and consumer protection regulations. Non-compliance with the AI Act is likely to be treated as strong evidence of negligence in these proceedings.

Class actions and collective redress

The EU Representative Actions Directive (2020/1828) allows qualified entities to bring collective actions on behalf of consumers. AI-related harms -- particularly those affecting large groups through biased or discriminatory systems -- are prime candidates for collective redress. A single AI system that discriminates in credit scoring could affect thousands of consumers, creating significant aggregate liability exposure.

Reputational and market costs

The GDPR enforcement experience has shown that reputational damage often exceeds the direct financial penalty. The same pattern is expected for AI Act enforcement.

Public enforcement decisions. National authorities typically publish enforcement decisions, including the identity of the offender and the details of the violation. This creates lasting reputational harm that search engines preserve indefinitely.

Market access restrictions. Authorities can order the withdrawal or recall of non-compliant AI systems from the EU market. For AI-dependent businesses, losing access to the 450-million-person EU market is potentially existential.

Customer and partner confidence. B2B customers increasingly require AI compliance assurances in procurement processes. A public enforcement action undermines trust and can trigger contract termination clauses.

Talent attraction. Technical talent, particularly in AI and data science, increasingly considers employer ethics and compliance track records. A reputation for regulatory violations makes hiring harder in a competitive market.

Investor and board scrutiny. AI governance failures are increasingly material for investors, auditors, and board members. ESG reporting frameworks now incorporate AI governance metrics.

Comparison with GDPR enforcement: what to expect

The GDPR enforcement trajectory from 2018 to 2026 offers the most useful preview of how AI Act enforcement is likely to unfold.

GDPR enforcement lessons

Slow start, then acceleration. GDPR enforcement was modest in the first 12 to 18 months, then accelerated sharply. By 2025, cumulative GDPR fines exceeded EUR 4.5 billion. Expect a similar pattern for the AI Act.

Headline-making early cases. The first major GDPR fines targeted high-profile companies to establish deterrence. AI Act enforcement is likely to follow the same playbook, targeting clear-cut violations by visible companies.

Complaint-driven enforcement. A significant portion of GDPR enforcement was triggered by individual complaints and whistleblower reports. Article 87 whistleblower protections in the AI Act will likely generate a steady stream of enforcement triggers.

Cross-border complexity. GDPR's one-stop-shop mechanism created delays and jurisdictional disputes. The AI Act's enforcement coordination may face similar challenges, but the AI Office's direct oversight of GPAI providers should reduce some friction.

Key differences from GDPR

The AI Act penalties are higher (7% vs 4% maximum), the scope extends beyond data protection to safety and fundamental rights, and the enforcement infrastructure benefits from lessons learned during GDPR implementation. National authorities are being set up with AI-specific technical expertise from day one, something that took years to develop for GDPR.

For a detailed comparison of how these frameworks interact, see our guide on AI literacy obligations under Article 4, which explains how training requirements under the AI Act complement GDPR data protection training.

Insurance implications

The insurance landscape for AI-related risk is evolving rapidly but remains immature. Most organisations are currently underinsured for AI Act liability.

Standard policies exclude regulatory fines. Most general liability and professional indemnity policies do not cover administrative fines. Some explicitly exclude penalties under data protection and AI regulation.

D&O insurance is patchy. Directors and officers policies may cover investigation costs and legal defence but typically exclude the fines themselves. Some policies are being updated to include AI governance failures, but this is not yet standard.

Cyber insurance has limited coverage. Cyber liability policies may cover AI-related data breaches and associated costs, but they will not cover administrative fines for AI Act violations unrelated to data security.

AI-specific insurance is emerging. A small number of insurers are developing bespoke AI compliance insurance products. Premiums are high and coverage terms vary widely. The market will mature as enforcement creates claims data.

Bottom line

Do not rely on insurance to cover AI Act non-compliance. The cost of building and maintaining a compliance programme is a fraction of even a modest fine, and it creates business value beyond risk mitigation. Insurance is a supplement to compliance, not a substitute for it.

How to avoid penalties: the compliance path

After ten articles in this series -- from our AI Agents Guy happily using AI, through the police raid, the arrest, the interrogation, the court appearance, and now the sentencing -- the message is clear. Compliance is not optional, it is not a bureaucratic burden, and it is dramatically cheaper than the alternative.

Here is what the full compliance path looks like. We have covered each element in detail across this series.

Step 1: Know what you have. Complete an AI inventory across your entire organisation. You cannot comply with rules about AI systems you do not know you are using. See our 10-step compliance checklist.

Step 2: Classify your risk. Determine which of your AI systems are high-risk under Annex III. Most organisations discover their exposure is lower than feared -- but some discover it is higher. See our risk classification guide.

Step 3: Train your people. Article 4 AI literacy is already in force. Every person who touches AI in your organisation needs proportionate training. See our workforce training guide.

Step 4: Build your infrastructure. Choose the right deployment model -- cloud, local, or hybrid -- based on your data sensitivity and regulatory exposure. See our cloud vs local AI guide.

Step 5: Prepare for incidents. Build an incident response plan before you need it. Know your reporting obligations and have templates ready. See our incident response guide.

Step 6: Document everything. Compliance without documentation is not compliance. Build audit trails for your AI inventory, risk assessments, training records, impact assessments, and incident logs.

Step 7: Review and iterate. Compliance is a cycle, not a destination. Schedule quarterly reviews, track regulatory updates, and update your programme as your AI usage evolves.

The organisations that start now -- even if their programme is imperfect -- will be in a fundamentally stronger position than those that wait for the first enforcement wave to panic.

EU AI Act compliance training for your entire organisation. Certificates included.

Series finale
Don't be this guy.

Our AI Agents Guy went from happily using AI to getting arrested, interrogated, tried, and sentenced -- all because he did not take EU AI Act compliance seriously. You have read the full story. You know what is at stake. The only question left is: what are you going to do about it?

Prefer a guided approach?

Talk to our compliance team

Book a free call to discuss your organisation's compliance status and build a plan that protects you from enforcement risk.

Frequently asked questions

What is the maximum fine under the EU AI Act?
How are EU AI Act fines calculated?
Who enforces the EU AI Act?
Are there reduced penalties for SMEs and startups?
Can individuals be personally liable for AI Act violations?
How does the EU AI Act compare to GDPR in terms of penalties?
When will the first EU AI Act fines be issued?
Does compliance insurance cover EU AI Act fines?

Sources and further reading

Official EU institutional sources and enforcement guidance.

EU AI Act full text (Regulation (EU) 2024/1689) -- Official Journal of the European Union
Article 99 -- Administrative fines -- EU AI Act Service Desk
European AI Office overview -- European Commission
AI Liability Directive (proposed) -- European Commission
Revised Product Liability Directive -- EUR-Lex
GDPR enforcement tracker -- Enforcementtracker.com

All 10 parts of the Keeping Humans Out of Jail series.

How to train your workforce for EU AI Act compliance -- People pillar -- 22 min read
EU AI Act compliance checklist: 10 steps -- Compliance pillar -- 14 min read
AI incident response: when your AI fails -- Compliance pillar -- 15 min read
This is the final article in the 10-part Keeping Humans Out of Jail series. The series is regularly updated to reflect the latest regulatory guidance from the European Commission, AI Office, and national market surveillance authorities. Last reviewed 27 March 2026. Agentic Fluxus provides EU AI Act compliance training, tools, and infrastructure for organisations of all sizes. Start your compliance journey.