
EU AI Act compliance checklist: 10 steps before August 2026

A practical 10-step compliance checklist for the EU AI Act. From AI inventory to ongoing monitoring -- everything your business needs before enforcement begins.

Summary

The EU AI Act is now law, with the first obligations already in force and full enforcement starting August 2026. Most organisations know they need to comply but have no practical roadmap. This guide breaks compliance into 10 concrete steps -- from inventorying your AI systems to establishing ongoing monitoring. Each step cites the specific regulation articles, explains what is required, and gives you something actionable to do this week. Whether you are a sole trader using AI writing tools or an enterprise deploying automated decision-making, this checklist scales to your situation.

Agentic Fluxus · EU AI Act compliance specialist · Published 25 Mar 2026 · 14 min
Key takeaways

1. Compliance is a process, not an event. You cannot achieve it in a single sprint. Start with an inventory and risk classification, then build from there.

2. Most companies are deployers, not providers. Your obligations are real but different from those of companies building AI systems. Understanding your role saves time and money.

3. August 2026 is the main enforcement deadline, when high-risk AI system obligations and national enforcement begin. The AI literacy obligation under Article 4 has been in force since February 2025.

4. Existing GDPR processes give you a head start. Data protection impact assessments, documentation practices, and your DPO role all translate to AI Act compliance.

1. Inventory every AI system in your organisation

You cannot comply with a regulation about AI systems if you do not know which AI systems you have. This sounds obvious, but it is the step most organisations skip -- or underestimate. The EU AI Act's definition of an AI system (Article 3(1)) is broad: a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that infers how to generate outputs from the input it receives.

That definition captures far more than the tools your teams consciously think of as "AI." It includes the recommendation engine in your CRM, the candidate scoring module in your HR platform, the predictive text in your email client, the fraud detection layer in your payment processor, and the automated routing in your customer support system. If a tool learns from data, makes predictions, or generates content, it is likely in scope.

'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

EU AI Act, Article 3(1), Regulation (EU) 2024/1689

How to run the inventory

Go department by department. For each team, ask one question: does any tool you use learn from data, make predictions, generate content, or automate decisions?

Marketing. AI writing assistants (Jasper, Copy.ai, ChatGPT), programmatic advertising platforms with automated bidding, social media scheduling tools with predictive engagement features, email platforms with send-time optimisation or subject line generation.

Human resources. CV screening tools, applicant tracking systems with candidate ranking, employee sentiment analysis, automated interview scheduling with calendar prediction, performance review platforms with scoring algorithms.

Finance. Fraud detection systems, automated invoice processing, expense categorisation tools, cash flow forecasting, credit risk scoring for clients.

Customer service. Chatbots and virtual assistants, ticket routing and prioritisation systems, sentiment analysis on incoming messages, automated email response generators.

IT and operations. Network anomaly detection, predictive maintenance systems, automated security threat detection, code completion tools (GitHub Copilot, Cursor), log analysis with pattern recognition.

Legal and compliance. Contract analysis tools, regulatory change monitoring, document review with AI extraction, e-discovery platforms with predictive coding.

The hidden AI problem

The biggest risk in this step is AI you do not know about. Shadow AI -- tools adopted by individual employees without IT approval -- is widespread. A 2025 survey by the OECD found that over 60% of employees in EU organisations use AI tools that their employer has not formally sanctioned. Your inventory must account for this. Consider an anonymous survey, check browser extension lists, and review SaaS subscriptions.

Document each system with at minimum: the tool name, the vendor, which department uses it, what it does, what data it processes, and who is responsible for it. A spreadsheet works. The format does not matter -- the completeness does.
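
If you want something slightly more structured than a spreadsheet, a minimal record type works just as well. Here is a sketch in Python; the field names are illustrative (the Act prescribes completeness, not format), and the example entry and vendor are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of the AI inventory. Field names are illustrative;
    the AI Act prescribes no format, only completeness."""
    tool_name: str        # e.g. "Applicant ranking module"
    vendor: str           # who provides the system
    department: str       # which team uses it
    purpose: str          # what the system does
    data_processed: str   # categories of data it touches
    owner: str            # person responsible internally
    recorded: date = field(default_factory=date.today)

# Hypothetical example entry for an HR screening tool
inventory = [
    AISystemRecord(
        tool_name="Applicant ranking module",
        vendor="ExampleHR GmbH",
        department="Human resources",
        purpose="Ranks incoming CVs against job profiles",
        data_processed="CVs, contact details, assessment scores",
        owner="Head of HR",
    ),
]
```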

2. Classify each system by risk level

The EU AI Act uses a risk-based framework. Not all AI systems are treated equally. Article 6 and Annex III establish four tiers, and where your systems fall determines your compliance obligations. Get the classification wrong and you either over-invest in compliance for low-risk tools or, worse, under-invest for high-risk ones.

The four risk tiers

Prohibited (Article 5). AI systems banned outright in the EU. Social scoring by public authorities. Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions). Manipulation techniques that exploit vulnerabilities. Emotion recognition in workplaces and educational institutions. Untargeted scraping of facial images to build recognition databases. These prohibitions have been in force since February 2025.

High-risk (Article 6 + Annex III). AI systems subject to the strictest compliance requirements. Includes: biometric identification and categorisation, management and operation of critical infrastructure, education and vocational training (access and assessment), employment (recruitment, promotion, termination decisions), access to essential services (credit scoring, insurance pricing), law enforcement, migration and border control, and administration of justice. If your AI system makes or materially influences decisions about people in these domains, it is likely high-risk.

Limited risk (Article 50). AI systems with transparency obligations. Chatbots must disclose that users are interacting with AI. AI-generated content (deepfakes, synthetic text) must be labelled as such. Emotion recognition systems must inform subjects. These are disclosure obligations, not conformity assessments.

Minimal risk. Everything else. Spam filters, AI-assisted spell checking, recommendation engines for non-essential services, predictive text. No specific obligations under the AI Act beyond the baseline Article 4 AI literacy requirement.

Decision tree for classification

For each AI system in your inventory, work through these questions in order (a minimal code sketch follows the list):

1. Does the system fall under any Article 5 prohibition?
Yes: stop deployment immediately. These are banned.
No: proceed to the next question.

2. Does the system operate in any Annex III domain?
Yes: it is likely high-risk. Proceed to full conformity assessment.
No: proceed to the next question.

3. Does the system interact directly with people or generate content?
Yes: limited risk. Apply transparency obligations (Article 50).
No: minimal risk. Baseline AI literacy (Article 4) applies.
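
For larger inventories it can help to encode the ordering of these questions so every system is classified the same way. A minimal sketch in Python, assuming the yes/no answers themselves come from your legal analysis, not from code:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6 + Annex III)"
    LIMITED = "limited risk (Article 50)"
    MINIMAL = "minimal risk (Article 4 baseline only)"

def classify(article5_prohibited: bool,
             annex3_domain: bool,
             interacts_or_generates: bool) -> RiskTier:
    """Encodes only the ordering of the decision tree above;
    the boolean inputs are legal judgement calls."""
    if article5_prohibited:
        return RiskTier.PROHIBITED   # stop deployment immediately
    if annex3_domain:
        return RiskTier.HIGH         # full conformity track
    if interacts_or_generates:
        return RiskTier.LIMITED      # transparency obligations
    return RiskTier.MINIMAL

# CV screening: not prohibited, but an Annex III employment domain
print(classify(False, True, True).value)  # high-risk (Article 6 + Annex III)
```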

Real-world classification examples

Your company uses an AI-powered CV screening tool. Employment and recruitment is listed in Annex III, Category 4. This is a high-risk system. You need a conformity assessment, fundamental rights impact assessment, human oversight, and technical documentation from the provider.

Your marketing team uses ChatGPT to draft social media posts. This is a general-purpose AI system used for content generation. It falls under limited risk -- you need to disclose that content is AI-generated. The GPAI provider (OpenAI) has separate obligations.

Your IT team uses an AI spam filter on your email server. Minimal risk. No specific AI Act obligations beyond Article 4 AI literacy for the staff who configure and oversee it.

Your call centre uses emotion recognition to assess customer sentiment. Emotion recognition in the workplace is prohibited under Article 5(1)(f). If you use it to evaluate employees, stop. If used solely to analyse customer calls (not employees), it falls under limited risk with mandatory disclosure.

Important

Classification is not static. If you change how a system is used -- for example, using a general chatbot to screen job applicants -- its risk classification changes. Reassess whenever the use case changes, not just at scheduled reviews.

3. Determine your role under the AI Act

The EU AI Act assigns different obligations depending on your relationship to the AI system. Article 3 defines four roles: provider, deployer, importer, and distributor. Most companies reading this guide are deployers -- you use AI systems built by others. But your exact role matters because it determines which articles apply to you.

Provider (Article 3(3)). Develops or commissions an AI system and places it on the market or puts it into service under their own name or trademark. If you build AI models or products, you are a provider. Providers carry the heaviest obligations: conformity assessments, technical documentation, post-market monitoring, quality management systems.

Deployer (Article 3(4)). Uses an AI system under their authority, except for personal non-professional activity. If you buy, license, or subscribe to AI tools for your business, you are a deployer. This is the most common role. Deployer obligations include: fundamental rights impact assessments for high-risk systems (Article 27), human oversight (Article 26), transparency with affected persons, incident reporting, and AI literacy (Article 4).

Importer (Article 3(6)). Places an AI system from a third country on the EU market. Relevant if you bring non-EU AI products into Europe for distribution.

Distributor (Article 3(7)). Makes an AI system available on the EU market without being a provider or importer. Resellers and marketplace operators fall here.

When deployers become providers

Article 25 is critical here. If you substantially modify a high-risk AI system, or if you put your own name or trademark on it, you become a provider for the purposes of that system -- inheriting all provider obligations. Using an AI tool exactly as the vendor intended keeps you in the deployer lane. Customising it significantly can shift you into provider territory.

For each AI system in your inventory, document your role. In most cases, your vendor is the provider and you are the deployer. But check: do you fine-tune models? Do you integrate AI APIs into your own products? Do you rebrand AI tools? Any of these can change your classification.
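
Those three questions can double as a rough self-check. A deliberately simple sketch follows; the real Article 25 analysis needs legal review, and this only flags when to seek it:

```python
def role_check(fine_tunes_models: bool,
               integrates_into_own_product: bool,
               rebrands_under_own_name: bool) -> str:
    """First-pass heuristic only: any 'yes' answer means the Article 25
    deployer-to-provider question deserves proper legal analysis."""
    if fine_tunes_models or integrates_into_own_product or rebrands_under_own_name:
        return "possible provider under Article 25 -- seek legal review"
    return "deployer (Article 3(4))"

print(role_check(False, False, False))  # deployer (Article 3(4))
```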

4. Assign governance ownership

Someone in your organisation needs to own AI compliance. Article 4 and Recital 20 of the AI Act make it clear that AI literacy -- and by extension, governance -- is an organisational responsibility, not something that happens by accident. The regulation does not prescribe a specific role or title, but it does require that measures are taken and that they are documented.

This does not mean you need a new hire. In most organisations, AI governance can be assigned to an existing role:

Data Protection Officer (DPO). Already understands regulatory compliance, data handling, and impact assessments. Natural extension to AI governance, especially where AI systems process personal data.

Chief Technology Officer (CTO). Understands the technical architecture and can assess AI system risks from an engineering perspective. Best fit in technology-driven organisations.

Compliance lead or General Counsel. Handles regulatory obligations and vendor contract management. Strong fit for organisations where AI risk is primarily a legal and contractual concern.

Chief Information Security Officer (CISO). Already manages IT risk, incident response, and audit trails. Logical extension in organisations where AI security and resilience are primary concerns.

What matters is that the assignment is documented, the person has the authority to make decisions, and they have access to the resources they need. Recital 20 emphasises that AI literacy measures should be proportionate, but they must be demonstrable. A verbal agreement that "someone is handling it" will not survive regulatory scrutiny.

Minimum governance documentation

Named responsible person. Title, name, and reporting line. If the role is shared (for example, DPO and AI governance), document the allocation of time and responsibilities.

Scope of authority. What decisions can this person make? Can they halt deployment of an AI system? Can they require vendor contract changes? Can they mandate training?

Reporting structure. Who does the AI governance lead report to? How often? In what format?

Budget and resources. What budget is allocated for AI compliance? What tools, external support, or training does the governance lead have access to?

5. Conduct impact assessments for high-risk systems

If your risk classification (step 2) identified any high-risk AI systems, Article 27 requires certain deployers -- public bodies, private entities providing public services, and deployers of the credit scoring and insurance pricing systems in points 5(b) and (c) of Annex III -- to carry out a fundamental rights impact assessment (FRIA) before putting those systems into use. This is not optional and it is not a formality. It is a structured analysis of how the AI system could affect the fundamental rights of the people it touches.

Prior to deploying a high-risk AI system referred to in Article 6(2), deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5(b) and 5(c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce.

EU AI Act, Article 27(1), Regulation (EU) 2024/1689

What the assessment must cover

Description of the deployer's processes. How will the AI system be used? What decisions will it inform or automate? What is the human role in the loop?

Period and frequency of use. Is this continuous operation or periodic? How many people will be affected and over what timeframe?

Categories of affected persons. Who is subject to the AI system's outputs? Employees, customers, members of the public? Vulnerable groups require heightened scrutiny.

Specific risks of harm. What could go wrong? Discrimination, privacy violation, denial of services, safety risks? Be concrete, not generic.

Human oversight measures. What oversight mechanisms are in place? Who monitors the system? How do they intervene? What triggers intervention?

Mitigation measures. What steps have you taken to reduce identified risks? Technical safeguards, procedural controls, appeal mechanisms?
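
Because these six elements are all required, it is worth checking drafts mechanically before sign-off. A minimal completeness check, with illustrative section keys standing in for the Article 27 elements above:

```python
# Section keys are illustrative labels for the Article 27 elements above.
FRIA_SECTIONS = [
    "deployer_processes",    # how the system is used, human role in the loop
    "period_and_frequency",  # duration and scale of use
    "affected_persons",      # who is subject to the system's outputs
    "risks_of_harm",         # concrete harms, not generic ones
    "human_oversight",       # who monitors, how they intervene
    "mitigation_measures",   # safeguards, controls, appeal mechanisms
]

def missing_sections(assessment: dict) -> list[str]:
    """Return the required sections that are empty or absent."""
    return [s for s in FRIA_SECTIONS if not assessment.get(s)]

draft = {
    "deployer_processes": "CV ranking informs, but does not decide, shortlisting.",
    "affected_persons": "External job applicants.",
}
print(missing_sections(draft))
# ['period_and_frequency', 'risks_of_harm', 'human_oversight', 'mitigation_measures']
```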

Timing matters. Conduct the assessment before deployment, not after. Reassess after any significant change to the system, the data it processes, or the context in which it operates. If your vendor pushes a major model update, that warrants reassessment.

Practical tip

If you already conduct Data Protection Impact Assessments (DPIAs) under GDPR, use them as a starting point. The structure is similar. The key addition is the focus on fundamental rights beyond data protection: non-discrimination, access to services, human dignity, and effective remedy.

6. Update vendor contracts

If you deploy AI systems built by others -- and most organisations do -- your vendor relationships are a critical compliance surface. Article 26 gives deployers their own obligations, but many of those obligations depend on information and documentation that only the provider can supply. If your contracts do not require your vendors to provide what you need, you are building compliance on an unstable foundation.

What to require from AI vendors

EU Declaration of Conformity. For high-risk AI systems, the provider must issue a declaration of conformity under Article 47. Your contract should require the vendor to provide this before deployment and to update it after any material changes.

Technical documentation. Article 11 requires providers of high-risk AI systems to produce detailed technical documentation. As a deployer, you need access to enough of this documentation to understand how the system works, what data it was trained on, its known limitations, and its performance metrics.

Instructions for use. Article 13 requires providers to supply clear instructions. These should cover intended purpose, known risks, human oversight requirements, and performance specifications. If your vendor's documentation is vague or missing, that is a red flag.

Incident notification. Your contract should require the vendor to notify you of any incidents, malfunctions, or risks identified after deployment. Article 73 requires providers to report serious incidents to authorities, but you need parallel visibility as the deployer.

Cooperation with audits. Build in the right to audit or request evidence of the vendor's compliance with their AI Act obligations. If a national authority investigates your use of a high-risk system, you need your vendor's cooperation.

Contract clause template

A minimum viable clause for AI vendor contracts: "The Provider warrants that the AI System complies with Regulation (EU) 2024/1689 (EU AI Act) and shall provide the Deployer with: (a) the EU Declaration of Conformity where applicable, (b) technical documentation sufficient for the Deployer to fulfil its obligations under Articles 26 and 27, (c) instructions for use as required by Article 13, and (d) prompt notification of any incidents, malfunctions, or post-market findings relevant to the Deployer's use of the AI System."

Review every AI vendor contract in your portfolio. Prioritise high-risk systems, but do not ignore limited-risk ones -- even minimal-risk tools should have clear terms about data handling, system updates, and notification of material changes. If a vendor refuses to provide basic compliance documentation, consider whether they are a suitable partner for AI deployment in the EU.

7. Build your training programme

Article 4 of the EU AI Act requires all providers and deployers to ensure sufficient AI literacy among their staff and other persons dealing with AI systems on their behalf. This obligation has been in force since 2 February 2025. It is not waiting for August 2026 -- it applies now.

The challenge is that "sufficient AI literacy" means different things for different roles. A warehouse operator using an AI-powered inventory system needs different knowledge than a compliance officer overseeing AI deployment across the organisation. The regulation accounts for this through proportionality: training must reflect the person's technical knowledge, experience, and the context in which the AI system is used.

A role-based approach

All staff -- awareness level. Everyone in the organisation should understand: what AI is and is not, which AI systems the organisation uses, their basic rights and responsibilities under the AI Act, how to recognise when AI is being used, and who to contact with concerns. This can be delivered through online modules in 2 to 3 hours.

Managers -- risk and governance level. People who approve AI tools, manage teams using AI, or make procurement decisions need deeper knowledge: risk classification, vendor assessment criteria, documentation obligations, oversight responsibilities, and how to escalate incidents. Half-day to full-day training, with scenario-based exercises.

Operators -- technical oversight level. Staff who directly configure, monitor, or intervene in AI system operation need the most intensive training: system-specific technical understanding, human oversight protocols, incident detection and response, logging and documentation requirements, and the specific obligations for their system's risk category. One to two days of training, with hands-on practice.
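
One way to keep role-to-training assignments auditable is to record the mapping explicitly. A sketch using the three tiers above; the effort figures are this guide's estimates, not numbers from the regulation:

```python
# Effort estimates mirror this guide's suggestions, not the Act itself.
TRAINING_PATHS = {
    "all_staff": {"level": "awareness",
                  "format": "online modules", "effort": "2-3 hours"},
    "managers":  {"level": "risk and governance",
                  "format": "scenario workshop", "effort": "0.5-1 day"},
    "operators": {"level": "technical oversight",
                  "format": "hands-on training", "effort": "1-2 days"},
}

def curriculum_for(role_tier: str) -> dict:
    """Look up the training path for a role tier, e.g. 'managers'."""
    return TRAINING_PATHS[role_tier]

print(curriculum_for("managers")["effort"])  # 0.5-1 day
```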

We wrote a comprehensive guide on this topic: How to Train Your Workforce for EU AI Act Compliance. It covers the Article 4 legal text in detail, the Commission's official guidance, role-based training paths, and implementation timelines.

Where Agentic Fluxus comes in

This is what we built our platform for. Our staff awareness course covers the baseline literacy every employee needs. Our manager course goes deeper into risk assessment and governance. Each course is designed around the EU AI Act's actual requirements, not generic AI training. And because we are an AI-native platform, we practice what we teach.

8. Create documentation and audit trails

Compliance without documentation is not compliance. Articles 12 and 14 of the AI Act establish record-keeping and human oversight requirements that apply to high-risk AI systems, but the principle extends further: Article 4's AI literacy obligation is only credible if you can demonstrate what measures you took.

Think of documentation as your compliance evidence chain. If a national authority asks to see your AI Act compliance programme, you need to produce a coherent set of records that connects your AI inventory to your risk classifications, your governance structure, your vendor assessments, your training programme, and your monitoring activities.

What records to keep

AI system inventory. The complete list from step 1, kept current. Version history showing when systems were added, removed, or reclassified.

Risk classification records. For each system, the classification rationale, the date of assessment, and the person who performed it.

Governance documentation. The named AI governance lead, their authority, reporting structure, and any governance board minutes.

Fundamental rights impact assessments. For high-risk systems, the full FRIA documentation as required by Article 27. Keep drafts and final versions.

Vendor compliance records. Declarations of conformity, technical documentation extracts, contract clauses, and correspondence about compliance.

Training records. Who was trained, when, on what content, with what assessment results. Completion certificates. Programme review dates.

Incident logs. Any AI-related incidents, near-misses, complaints, or concerns raised. Investigation outcomes and remedial actions taken.

Monitoring and review records. Quarterly review notes, annual reassessment results, system performance audits, and any changes triggered by monitoring.

Retention and format

The AI Act does not prescribe a specific document format. What matters is that records are complete, accessible, and auditable. Use whatever system your organisation already uses for compliance records -- document management systems, shared drives, GRC platforms. The key requirement for high-risk AI system logs (Article 19 for providers, Article 26(6) for deployers) is that they are kept for a period appropriate to the intended purpose, and for at least six months unless otherwise specified by Union or national law.

For training records specifically, retain them for at least the duration of the AI system's deployment plus two years. If a regulatory inquiry occurs, you will need to demonstrate what training was in place at the time of any incident, not just what you have now.
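
That rule of thumb is easy to compute and worth recording alongside each training record. A minimal sketch; check national law for longer sector-specific retention periods:

```python
from datetime import date

def training_retention_until(deployment_end: date) -> date:
    """Deployment end plus two years, per the rule of thumb above."""
    try:
        return deployment_end.replace(year=deployment_end.year + 2)
    except ValueError:  # deployment_end was 29 February
        return deployment_end.replace(year=deployment_end.year + 2, day=28)

print(training_retention_until(date(2027, 6, 30)))  # 2029-06-30
```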

9. Establish incident response procedures

AI systems will malfunction, produce biased outputs, or cause unintended harm. The question is not whether, but when. Article 26(5) requires deployers of high-risk AI systems to report serious incidents to the relevant market surveillance authority. But even if your systems are not classified as high-risk, having incident response procedures is both prudent risk management and evidence of responsible AI governance.

What counts as an incident?

The AI Act defines a "serious incident" (Article 3(49)) as an incident or malfunctioning of an AI system that directly or indirectly leads to: death or serious damage to health, serious and irreversible disruption of critical infrastructure management, breach of fundamental rights obligations, or serious damage to property or the environment. But your internal procedures should cover a broader range of events:

Discriminatory outputs. The AI system produces results that systematically disadvantage a protected group. Example: a recruitment tool consistently ranks female candidates lower.

Accuracy failures. The system produces incorrect outputs that lead to real-world consequences. Example: a credit scoring system miscalculates risk, leading to wrongful denial of services.

Privacy violations. The AI system processes personal data in ways not covered by its intended purpose or your DPIA. Example: a chatbot retains and surfaces customer data from unrelated conversations.

Safety risks. The system's outputs create physical safety concerns. Example: a predictive maintenance system fails to flag a critical equipment failure.

Transparency failures. Users are not informed they are interacting with AI, or AI-generated content is not labelled as such.

Building your incident response plan

Detection. How will incidents be identified? Automated monitoring, user complaints, staff reports, regular audits? Define the channels and make them accessible.

Assessment. Who assesses the severity? Use a clear severity matrix: low (no harm, system corrected), medium (potential harm, requires investigation), high (actual harm, requires immediate action and reporting). A code sketch of this matrix follows the list.

Escalation. Define escalation paths. Who gets notified at each severity level? When does the AI governance lead get involved? When does the board need to know?

Reporting. For serious incidents involving high-risk AI systems, Article 26(5) requires notification to the market surveillance authority without undue delay and no later than 15 days. Document the reporting process, including which authority to contact in your member state.

Remediation. What happens after an incident? Root cause analysis, system adjustments, additional training, vendor communication, affected person notification.

Documentation. Every incident, from detection through remediation, must be documented. This feeds into your audit trail (step 8) and informs your monitoring reviews (step 10).
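
The severity matrix and escalation paths can be pinned down in a few lines so that on-call staff do not improvise under pressure. An illustrative sketch; the roles and the authority note are placeholders to adapt to your own structure:

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1     # no harm, system corrected
    MEDIUM = 2  # potential harm, requires investigation
    HIGH = 3    # actual harm, immediate action and reporting

# Illustrative escalation table; adapt roles to your own structure.
ESCALATION = {
    Severity.LOW:    ["system owner"],
    Severity.MEDIUM: ["system owner", "AI governance lead"],
    Severity.HIGH:   ["system owner", "AI governance lead", "board",
                      "market surveillance authority (high-risk systems, Art. 26(5))"],
}

def notify_list(severity: Severity) -> list[str]:
    """Who must be informed at each severity level."""
    return ESCALATION[severity]

print(notify_list(Severity.MEDIUM))  # ['system owner', 'AI governance lead']
```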

Whistleblower protection

Article 87 of the AI Act explicitly provides for the protection of persons reporting infringements. Staff who raise concerns about AI system compliance -- internally or to authorities -- are protected. Make sure your incident response procedures include a clear, safe channel for employees to report AI-related concerns without fear of retaliation.

10. Set up ongoing monitoring and review

Compliance is not a one-time exercise. Article 26(1) requires deployers of high-risk AI systems to use those systems in accordance with the instructions of use and to take appropriate technical and organisational measures. In practice, this means ongoing monitoring -- of the systems themselves, the regulatory landscape, and your own compliance programme.

The EU AI Act is a living regulatory framework. The Commission can update the list of high-risk AI systems in Annex III. National authorities are still developing their enforcement approaches. Standards bodies are publishing technical standards. A compliance programme that is not designed to evolve will become outdated within months.

What ongoing monitoring looks like

System performance monitoring. Track the accuracy, fairness, and reliability of your AI systems over time. Models drift. Data distributions change. A system that was compliant at deployment may not be compliant six months later if it has been learning from new data without oversight. A minimal drift check is sketched after this list.

Quarterly reviews. Every three months, review your AI inventory, risk classifications, and governance documentation. Has anything changed? New tools adopted? Existing tools used differently? Staff changes in key oversight roles?

Annual reassessments. Once a year, conduct a full reassessment: repeat the inventory, reclassify systems if needed, update impact assessments, refresh training content, and review vendor compliance. Treat this as your annual AI Act health check.

Regulatory update tracking. Follow developments from the European Commission, the AI Office, and your national market surveillance authority. The Commission publishes implementing and delegated acts that can change obligations. Harmonised standards will be published progressively. Stay current or delegate this to a specialist.

Training programme refresh. AI literacy is not a one-and-done exercise. New staff need onboarding. Existing staff need refresher training when systems change, roles change, or new regulatory guidance is published. Build annual refreshers into your training programme.
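
Drift monitoring does not have to start sophisticated. A minimal sketch that compares a tracked metric against its deployment-time baseline; the tolerance is a placeholder to set per system and per metric:

```python
def drift_alert(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag when a tracked metric (accuracy, fairness gap, error rate)
    deviates from its deployment-time baseline by more than the tolerance."""
    return abs(current - baseline) > tolerance

# Accuracy at deployment vs. accuracy at the quarterly review
if drift_alert(baseline=0.91, current=0.84):
    print("Investigate: performance drift beyond tolerance")
```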

Use our AI system checker to validate whether your deployed tools still meet compliance requirements. It cross-references your system details against the current AI Act obligations.

At a glance: quarterly review cycle · six months minimum log retention · annual full reassessment.
The compliance cycle

Think of these 10 steps not as a linear checklist but as a cycle. Your monitoring (step 10) feeds back into your inventory (step 1). New tools trigger new classifications (step 2). Regulatory changes require governance updates (step 4) and training refreshes (step 7). Incidents (step 9) inform your impact assessments (step 5). Build the cycle into your organisational rhythm and compliance becomes maintenance, not a project.


Frequently asked questions

What happens if we miss the August 2026 deadline?
Do small companies need to follow all 10 steps?
How long does full compliance take?
Can we use our existing GDPR processes?
What's the penalty for non-compliance?

Sources and further reading

Official EU institutional sources. We recommend consulting them directly for the most current guidance.

EU AI Act full text (Regulation (EU) 2024/1689) -- Official Journal of the European Union
Annex III: High-risk AI systems -- EU AI Act, Regulation (EU) 2024/1689
Fundamental Rights Impact Assessment guidance -- European Commission, DG CONNECT
AI Literacy: Questions & Answers (May 2025) -- European Commission, DG CONNECT
EU AI Act overview and timeline -- European Commission, Shaping Europe's Digital Future
AI Pact: compliance practices repository -- European AI Office

This article is regularly updated to reflect the latest regulatory guidance from the European Commission, AI Office, and national market surveillance authorities. Last reviewed 25 March 2026. Agentic Fluxus provides EU AI Act compliance training, tools, and infrastructure for organisations of all sizes.