Part 9 of 10 -- Keeping Humans Out of Jail

AI incident response: what to do when your AI system fails

A practical guide to AI incident response under the EU AI Act. The 6-step response plan, reporting obligations, whistleblower protections, and real incident examples every business should study.

Summary

AI systems fail. They hallucinate, they discriminate, they leak data, and they make confidently wrong decisions that humans act on. The EU AI Act introduces formal incident reporting obligations for deployers and providers of high-risk AI systems. But every organisation using AI -- regardless of risk classification -- needs an incident response plan. This guide covers what counts as an AI incident, the Article 26(5) reporting requirements, a practical 6-step response framework, whistleblower protections under Article 87, real incident examples, and how to build an AI incident response team. Because when your AI fails, the question is not if you will be investigated -- it is whether you can demonstrate you responded properly.

Agentic Fluxus
EU AI Act compliance specialist · Published 27 Mar 2026 · 15 min
Key takeaways

1. AI incidents are inevitable. No AI system is infallible. Hallucination, bias, data leaks, and system failures are not edge cases -- they are features of how current AI technology works.

2. The EU AI Act requires formal incident reporting. Deployers of high-risk AI systems must report serious incidents to authorities within 15 days. But all organisations should have a response plan.

3. Whistleblowers are protected. Article 87 protects anyone who reports AI Act violations. Your staff should know this and have clear channels to raise concerns.

4. How you respond matters more than whether it happens. Regulators will judge your response process, documentation, and corrective actions -- not the mere fact that an incident occurred.


What counts as an AI incident

The EU AI Act defines a "serious incident" (Article 3(49)) as an incident or malfunctioning of an AI system that directly or indirectly leads to death or serious harm to health, a serious and irreversible disruption of critical infrastructure, an infringement of fundamental rights obligations, or serious harm to property or the environment. Beyond that, the Act does not exhaustively define an AI incident, so in practice the scope is broad: any event where an AI system causes or could have caused harm -- to health, safety, fundamental rights, property, or the environment -- qualifies.

In practice, AI incidents fall into several categories that every organisation using AI should be prepared for.

Bias and discrimination

An AI-powered recruitment tool systematically downranks CVs from candidates with non-Western European names. A credit scoring system assigns lower scores to applicants from certain postcodes. An insurance pricing model charges higher premiums based on correlations with protected characteristics. These are not hypothetical scenarios -- they are documented incidents that have already occurred across Europe.

Hallucination and misinformation

A legal AI tool generates a filing that cites cases that do not exist. A customer service chatbot provides incorrect product safety information that a customer acts on. A medical information assistant suggests a drug interaction that is factually wrong. Hallucination -- where AI models generate plausible but false information -- is one of the most common and dangerous failure modes.

Data leaks and privacy violations

An AI assistant trained on internal documents inadvertently reveals confidential information in responses to different users. A prompt injection attack extracts sensitive data from an AI system's context. Customer personal data is included in AI training data without consent. These incidents trigger both AI Act and GDPR obligations simultaneously.

System failures and reliability issues

An autonomous AI system in a manufacturing environment stops responding during a safety-critical operation. An AI-powered traffic management system produces erratic routing during peak hours. A fraud detection system fails silently, allowing fraudulent transactions to proceed unchecked for hours.

Key distinction

Near-misses count. An AI system that produces a biased output that is caught and corrected before it affects anyone is still an incident worth documenting and investigating. Near-misses are your early warning system. Organisations that investigate near-misses catch systemic problems before they cause harm.
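These categories can be captured in a lightweight incident taxonomy so that every incident -- and every near-miss -- is logged the same way. A minimal sketch in Python; all class, field, and system names here are illustrative assumptions, not terms from the Act:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto


class IncidentCategory(Enum):
    """Broad AI incident categories described in this guide."""
    BIAS_DISCRIMINATION = auto()
    HALLUCINATION_MISINFORMATION = auto()
    DATA_LEAK_PRIVACY = auto()
    SYSTEM_FAILURE = auto()


@dataclass
class AIIncident:
    """Minimal record for a logged incident or near-miss."""
    system_name: str
    category: IncidentCategory
    description: str
    detected_at: datetime
    near_miss: bool = False  # caught before anyone was affected -- still logged


# A near-miss is recorded exactly like a realised incident.
caught_early = AIIncident(
    system_name="cv-screening-model",
    category=IncidentCategory.BIAS_DISCRIMINATION,
    description="Biased ranking caught in human review before shortlisting",
    detected_at=datetime(2026, 3, 1, 9, 30),
    near_miss=True,
)
```

Keeping near-misses in the same log as realised incidents is what turns them into the early warning system described above.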

At a glance: 15-day reporting deadline · Art. 26(5) deployer reporting · Art. 87 whistleblower protection

Article 26(5): deployer reporting obligations

Under Article 26(5) of the EU AI Act, deployers of high-risk AI systems have specific obligations when they become aware of a serious incident.

Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident.

EU AI Act, Article 26(5), Regulation (EU) 2024/1689

What triggers the reporting obligation

Serious incident involving a high-risk AI system. The incident must involve an AI system classified as high-risk under Article 6 and Annex III. If you are unsure whether your AI systems are high-risk, see our risk classification guide.

Causal link established. The report is due immediately once a causal link between the AI system and the incident -- or the reasonable likelihood of such a link -- has been established, and in any event no later than 15 days after you become aware of the serious incident. Some initial investigation time is therefore built into the framework, but the awareness clock runs regardless.

15-day reporting window. You must report to the market surveillance authority no later than 15 days after becoming aware of the incident. For incidents that also involve personal data breaches, the 72-hour GDPR deadline runs in parallel.
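The two parallel clocks can be made concrete with a small deadline calculator. A sketch, assuming the awareness timestamp is recorded when the incident is confirmed; the function and key names are illustrative:

```python
from datetime import datetime, timedelta

AI_ACT_WINDOW = timedelta(days=15)   # serious incident -> market surveillance authority
GDPR_WINDOW = timedelta(hours=72)    # personal data breach -> data protection authority


def reporting_deadlines(aware_at: datetime, personal_data_breach: bool) -> dict:
    """Return the notification deadlines triggered by one incident.

    Both clocks run in parallel from the moment of awareness; when personal
    data is involved, the GDPR deadline is usually the one that bites first.
    """
    deadlines = {"ai_act_msa": aware_at + AI_ACT_WINDOW}
    if personal_data_breach:
        deadlines["gdpr_dpa"] = aware_at + GDPR_WINDOW
    return deadlines


aware = datetime(2026, 4, 1, 14, 0)
d = reporting_deadlines(aware, personal_data_breach=True)
# GDPR deadline falls on 4 April; the AI Act deadline on 16 April.
```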

Who you must notify

The AI system provider. The company that built or supplied the AI system. They need to investigate whether the issue is systemic and affects other deployers.

The market surveillance authority. The national authority designated by each EU Member State to enforce the AI Act. In the Netherlands, this is the Autoriteit Persoonsgegevens (AP) for most AI systems.

Affected individuals. Where the incident has caused or is likely to cause harm to individuals, they should be informed promptly. This is especially important for incidents involving fundamental rights impacts.

Provider obligations for serious-incident reporting

Providers of high-risk AI systems have parallel reporting obligations under Article 73 of the final text (numbered Article 62 in earlier drafts). When a provider becomes aware of a serious incident -- whether through their own monitoring, deployer notification, or third-party reports -- they must investigate the root cause, assess whether the issue is systemic, implement corrective actions, and report to the market surveillance authority. Providers must also notify all known deployers of the issue and any necessary corrective measures.


The 6-step AI incident response plan

Based on the EU AI Act requirements and emerging best practice, here is a structured 6-step response framework that scales from sole traders to enterprise organisations.


Step 1: Detect and contain

Identify the incident as quickly as possible. If the AI system is actively causing harm, take immediate action to stop or limit the damage. This may mean disabling the system, reverting to manual processes, or restricting access. Speed matters here -- the longer a harmful AI system operates, the greater the damage and the harder the remediation. Ensure your AI systems have monitoring in place that can detect anomalies, unusual outputs, and error patterns.


Step 2: Assess severity and classify

Determine the severity of the incident and classify it according to your incident taxonomy. Key questions: Who was affected? How many people? What type of harm occurred or could occur? Is the AI system high-risk under the EU AI Act? Does this also constitute a personal data breach under GDPR? The classification determines your reporting obligations and response timeline.
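The triage questions above can be encoded as a first-pass classification helper. This is an illustrative sketch -- the severity labels and thresholds are assumptions for a starting taxonomy, not regulatory definitions:

```python
def classify(high_risk: bool, personal_data: bool, harm_occurred: bool,
             people_affected: int) -> dict:
    """Map the key triage questions to a severity label and the
    reporting obligations it triggers."""
    severity = "low"
    if harm_occurred or people_affected > 0:
        # High-risk systems and large-scale impact escalate straight to "high".
        severity = "high" if (high_risk or people_affected > 100) else "medium"
    return {
        "severity": severity,
        "report_to_msa": high_risk,        # market surveillance authority, 15 days
        "report_to_dpa": personal_data,    # data protection authority, 72 hours
        "notify_individuals": harm_occurred,
    }
```

Whatever taxonomy you adopt, the point is that classification is deterministic: the same answers always produce the same obligations and timeline.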


Step 3: Investigate root cause

Conduct a thorough investigation into what went wrong. Was it a model failure (hallucination, bias, drift)? A data quality issue? A system integration problem? A user error? Or was it an adversarial attack (prompt injection, data poisoning)? Document the investigation process and findings. This is critical for both the regulatory report and for preventing recurrence.


Step 4: Notify stakeholders

Trigger your notification workflow. For high-risk AI systems: notify the provider, the market surveillance authority (within 15 days), and affected individuals. If there is a personal data breach: notify the data protection authority (within 72 hours) and affected data subjects (without undue delay if high risk to rights). For all incidents: inform internal stakeholders, management, and the AI governance function.
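The notification workflow in this step can be sketched as an ordered checklist builder. The recipients and timing rules follow the text above; the function name and task structure are illustrative:

```python
from datetime import datetime, timedelta


def notification_tasks(aware_at: datetime, high_risk: bool,
                       data_breach: bool, harm_to_individuals: bool) -> list:
    """Build the ordered (recipient, deadline) checklist for one incident."""
    # Internal stakeholders are informed in every case, immediately.
    tasks = [("internal governance function", aware_at)]
    if high_risk:
        tasks.append(("AI system provider", aware_at))  # without delay
        tasks.append(("market surveillance authority", aware_at + timedelta(days=15)))
    if data_breach:
        tasks.append(("data protection authority", aware_at + timedelta(hours=72)))
    if harm_to_individuals:
        tasks.append(("affected individuals", aware_at))  # promptly
    return tasks


checklist = notification_tasks(datetime(2026, 4, 1), high_risk=True,
                               data_breach=True, harm_to_individuals=True)
```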


Step 5: Remediate and implement corrective actions

Fix the immediate problem and implement measures to prevent recurrence. This may involve retraining the model, updating safety filters, modifying the system prompt, adding human oversight steps, improving input validation, or replacing the AI system entirely. Document every corrective action and the rationale behind it.


Step 6: Review and improve

After the incident is resolved, conduct a post-incident review. What worked in your response? What did not? Are there systemic issues that need addressing? Update your incident response plan based on lessons learned. Share relevant findings (appropriately anonymised) with your team as part of your AI literacy training programme.

Whistleblower protections under Article 87

Article 87 of the EU AI Act establishes specific protections for individuals who report infringements of the regulation. This is a critical safeguard because AI incidents are often first noticed by the people closest to the system -- operators, developers, and end users -- who may be reluctant to report problems if they fear retaliation.

Who is protected

Employees. Any employee who reports an AI Act infringement to a competent authority is protected from retaliation. This includes dismissal, demotion, harassment, and any other form of disadvantageous treatment.

Contractors and service providers. External parties who discover and report AI incidents while working with an organisation are also covered.

Other persons. The protection extends broadly to any person who has a reasonable belief that an AI Act infringement has occurred or is occurring.

What organisations should do

Establish internal reporting channels. Create clear, accessible channels for staff to report AI incidents and concerns. These should be separate from regular management chains to encourage reporting.

Train staff on their rights. As part of your AI literacy programme, ensure all staff know about whistleblower protections and how to use reporting channels.

Protect reporters. Implement anti-retaliation policies and make it clear that reporting AI incidents is expected and valued, not punished.

Document the process. Maintain records of all reports received, investigations conducted, and outcomes. This is part of your compliance evidence.

Legal context

Article 87 builds on the EU Whistleblower Directive (2019/1937), which most Member States have already transposed into national law. The AI Act extends these protections specifically to AI-related reporting. Organisations that already have whistleblower mechanisms under the Directive can extend them to cover AI incidents.

Real AI incident examples

Learning from real incidents -- both your own and others' -- is one of the most effective ways to improve your AI governance. Here are documented examples that illustrate the types of incidents European businesses should prepare for.

The hallucinating legal assistant

In 2023, a US law firm submitted a court filing prepared with the help of ChatGPT. The filing cited six legal cases that did not exist -- the AI had hallucinated them with convincing case names, citations, and summaries. The lawyers were sanctioned by the court. Under the EU AI Act, legal AI tools used in the administration of justice are classified as high-risk (Annex III, point 8). Any organisation using AI for legal work must implement human oversight to verify factual claims before they are acted upon.

The biased recruitment engine

Amazon developed an AI recruitment tool that systematically downranked CVs from female candidates because it was trained on historical hiring data that reflected existing gender bias. The system was scrapped after internal discovery. Under the EU AI Act, AI systems used in recruitment and HR decisions are explicitly classified as high-risk (Annex III, point 4). Deployers must conduct fundamental rights impact assessments and implement bias monitoring.

The chatbot data leak

In early 2023, a major technology company's employees inadvertently pasted proprietary source code and internal meeting notes into ChatGPT. The data was ingested into OpenAI's systems and could potentially have been used for model training. This led to the company banning ChatGPT for internal use. Under GDPR and the EU AI Act, this type of data exposure would trigger both data breach notification and AI incident reporting if the data included personal information.

The silent fraud detection failure

A European financial institution's AI-powered fraud detection system experienced model drift that gradually reduced its sensitivity over several months. The degradation was not detected because monitoring focused on system uptime rather than detection accuracy. By the time the issue was discovered, the institution had suffered significant fraud losses. Under the EU AI Act's human oversight requirements, monitoring systems must track not just operational metrics but output quality and accuracy.

Each of these incidents would have been handled better with a structured response plan in place. The organisations that recovered most effectively were those that detected the problem quickly, contained it, and documented their response -- exactly what the EU AI Act compliance framework requires.

Building an AI incident response team

An AI incident response team does not need to be a separate department. For most organisations, it is a cross-functional group that convenes when an incident occurs, with clear roles and escalation paths defined in advance.

Core roles

Incident lead. Coordinates the response, makes decisions about containment and escalation, and owns the timeline. Typically a senior manager or the AI governance owner.

Technical investigator. Analyses the AI system to determine root cause. Could be an internal AI/ML engineer or an external specialist on retainer. Needs access to system logs, model parameters, and deployment configuration.

Legal and compliance advisor. Determines reporting obligations, drafts regulatory notifications, and advises on liability exposure. May be in-house counsel or an external firm with AI Act expertise.

Communications lead. Manages internal and external communications. Drafts statements for affected individuals, coordinates with PR if the incident becomes public, and manages media inquiries.

Data protection officer. If the incident involves personal data, the DPO assesses GDPR implications, coordinates with the data protection authority, and manages data subject notifications.

For smaller organisations

If you are a sole trader or small team, you may fill multiple roles yourself. The key is not the number of people but the clarity of the process. Even a one-person response plan is infinitely better than no plan. Document who does what, who you call for external support, and where the templates and contact details are stored.

Documentation requirements

Every AI incident must produce documentation that serves three purposes: regulatory compliance, operational learning, and legal defence.

What to document

Incident report. Date, time, AI system involved, description of what happened, who detected it, and initial severity assessment.

Impact assessment. Number and identity of affected persons, types of harm caused or risked, fundamental rights implications.

Investigation findings. Root cause analysis, contributing factors, system logs, model outputs that demonstrate the failure.

Response actions. Containment measures, corrective actions, timeline of each action taken, and who authorised them.

Notifications sent. Copies of notifications to the provider, market surveillance authority, data protection authority (if applicable), and affected individuals.

Post-incident review. Lessons learned, changes to the incident response plan, updates to AI governance policies, and training updates.
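The six documentation sections above can be pre-built as a report skeleton, so no one is designing forms mid-incident. A sketch -- the section and field names mirror the list above but are otherwise illustrative:

```python
# Pre-built skeleton covering the six documentation sections.
INCIDENT_REPORT_TEMPLATE = {
    "incident_report": {
        "date": None, "time": None, "ai_system": None,
        "description": None, "detected_by": None, "initial_severity": None,
    },
    "impact_assessment": {
        "affected_persons": None, "harm_types": None,
        "fundamental_rights_implications": None,
    },
    "investigation_findings": {
        "root_cause": None, "contributing_factors": None, "evidence_logs": None,
    },
    "response_actions": {
        "containment": None, "corrective_actions": None,
        "timeline": None, "authorised_by": None,
    },
    "notifications_sent": {
        "provider": None, "market_surveillance_authority": None,
        "data_protection_authority": None, "affected_individuals": None,
    },
    "post_incident_review": {
        "lessons_learned": None, "plan_updates": None, "training_updates": None,
    },
}


def missing_fields(report: dict) -> list:
    """List every unfilled field so nothing is missed before submission."""
    return [f"{section}.{key}" for section, fields in report.items()
            for key, value in fields.items() if value is None]
```

Run the completeness check before filing: an empty result means every field has been addressed.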

Practical tip

Create incident report templates before you need them. When an incident occurs, you do not want to be designing forms. Pre-built templates with clear fields ensure nothing is missed and speed up your response. Store them alongside your AI compliance documentation.

Prevention and ongoing monitoring

The best incident response is prevention. While no AI system is infallible, proactive monitoring significantly reduces the frequency and severity of incidents.

Output quality monitoring. Track accuracy, relevance, and consistency of AI outputs over time. Model drift -- where performance degrades gradually -- is one of the most common and insidious failure modes.

Bias auditing. Regularly test AI systems for discriminatory patterns across protected characteristics. Automated bias testing tools can flag issues before they affect real decisions.

Adversarial testing. Periodically test your AI systems for vulnerability to prompt injection, data poisoning, and other adversarial attacks. This is especially important for customer-facing AI.

User feedback loops. Create mechanisms for end users to flag problematic AI outputs easily. A simple thumbs-down button or feedback form can be your most effective early warning system.

Regular system reviews. Schedule quarterly reviews of all AI systems in your inventory. Check that they are performing as expected, that the underlying models have not changed without your knowledge, and that your documentation is current.
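Output-quality monitoring of this kind can be as simple as a rolling accuracy check over verified outcomes. A minimal sketch; the window size and alert floor are illustrative assumptions to be tuned per system:

```python
from collections import deque


class DriftMonitor:
    """Alert when rolling output accuracy drops below a floor.

    Guards against the silent-failure mode described above: uptime looks
    fine while detection quality quietly degrades.
    """

    def __init__(self, window: int = 100, floor: float = 0.90):
        self.results = deque(maxlen=window)  # most recent verified outcomes
        self.floor = floor

    def record(self, correct: bool) -> bool:
        """Record one verified outcome; return True if an alert should fire."""
        self.results.append(correct)
        if len(self.results) < self.results.maxlen:
            return False  # not enough data yet for a stable estimate
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.floor
```

Feeding this monitor a periodic sample of human-verified outputs is one way to track accuracy over time rather than uptime alone.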

As we discuss in our AI adoption guide, the organisations that succeed with AI are not the ones that avoid all problems -- they are the ones that detect and resolve problems quickly.



Sources and further reading

Official EU institutional sources and incident reporting frameworks.

- EU AI Act full text (Regulation (EU) 2024/1689) -- Official Journal of the European Union
- Article 26 -- Obligations of deployers -- EU AI Act Service Desk
- Article 73 -- Reporting of serious incidents -- EU AI Act Service Desk
- EU Whistleblower Directive (2019/1937) -- EUR-Lex
- GDPR Article 33 -- Notification of a personal data breach -- EUR-Lex
This article is regularly updated to reflect the latest regulatory guidance from the European Commission, AI Office, and national market surveillance authorities. Last reviewed 27 March 2026. Agentic Fluxus provides EU AI Act compliance training, tools, and infrastructure for organisations of all sizes. Next in the series: EU AI Act penalties explained