Frequently Asked Questions
EU AI Act: Your Questions Answered
Everything you need to know about the EU AI Act, compliance requirements, training obligations, and how to get started. Can't find your answer? Get in touch.
EU AI Act Basics
What is the EU AI Act?
The EU AI Act is the world's first comprehensive law regulating artificial intelligence. Adopted in 2024, it creates a legal framework for developing, deploying, and using AI systems across Europe. It classifies AI systems by risk level and sets obligations accordingly, from banned practices to transparency requirements.
Read our EU AI Act guide
When does the EU AI Act come into effect?
The EU AI Act entered into force on 1 August 2024, with phased deadlines. The ban on prohibited AI practices and the AI literacy obligation (Article 4) both apply from 2 February 2025. High-risk system requirements phase in through August 2026 and August 2027. Most companies need to act now, not later.
View compliance timeline
Who does the EU AI Act apply to?
It applies to any organisation that develops, deploys, imports, or distributes AI systems in the EU market, regardless of where the company is based. If your AI system affects people in the EU, you are in scope. This includes US and UK companies selling into Europe, not just EU-based businesses.
Check if you are affected
What are the penalties for non-compliance with the EU AI Act?
Fines can reach up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations (banned AI practices). High-risk system violations carry fines of up to 15 million euros or 3% of turnover. Supplying incorrect information to authorities can result in fines of up to 7.5 million euros or 1% of turnover.
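The fine structure above amounts to a simple calculation: each tier caps the fine at the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal illustrative sketch (the tier names are our shorthand, and this is not legal advice — actual fines are set case by case up to these caps):

```python
# Illustrative sketch of the EU AI Act's fine caps.
# Tier names are shorthand; real fines are decided case by case.

def fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier.

    The cap is the higher of a fixed amount and a percentage
    of worldwide annual turnover.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),   # banned AI practices
        "high_risk_violation": (15_000_000, 0.03),   # high-risk obligations
        "incorrect_information": (7_500_000, 0.01),  # misleading authorities
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with 2 billion euros global turnover:
print(fine_cap("prohibited_practice", 2_000_000_000))    # 140000000.0 (7% exceeds 35M)
print(fine_cap("incorrect_information", 2_000_000_000))  # 20000000.0
```

Note that for large companies the percentage figure, not the fixed amount, usually sets the ceiling.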
Understand the risk
Does the EU AI Act apply outside Europe?
Yes. Like GDPR, the EU AI Act has extraterritorial reach. If you place an AI system on the EU market, or if the output of your AI system is used within the EU, the regulation applies to you. Companies in the US, UK, Asia, and elsewhere must comply when serving EU customers or users.
Get compliance training
What is an AI system under the EU AI Act?
The Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness, and that infers from inputs to generate outputs such as predictions, content, recommendations, or decisions. This is broadly aligned with the OECD definition and covers most modern AI tools.
Classify your AI systems
What are the different risk categories in the EU AI Act?
The Act uses four risk tiers: unacceptable risk (banned outright, e.g. social scoring), high risk (strict requirements, e.g. AI in hiring or credit scoring), limited risk (transparency obligations, e.g. chatbots must disclose they are AI), and minimal risk (no specific obligations, e.g. spam filters). Most business AI falls under limited or high risk.
Assess your risk level
Is ChatGPT covered by the EU AI Act?
Yes. General-purpose AI models like GPT-4, Claude, and Gemini fall under specific provisions for GPAI (General Purpose AI) models. Providers must publish training data summaries, comply with copyright rules, and, if their models pose systemic risk, conduct adversarial testing and report incidents. Companies using these tools also have obligations.
Learn about GPAI rules
What is the difference between a provider and a deployer?
A provider develops or commissions an AI system and places it on the market. A deployer is any organisation using an AI system under its authority (for example, a company using an AI hiring tool built by someone else). Both have obligations, but they differ: providers handle technical compliance while deployers focus on use, oversight, and transparency.
Check your role
Compliance & Training
What is Article 4 AI literacy?
Article 4 of the EU AI Act requires organisations to ensure that all staff involved with AI systems have a sufficient level of AI literacy. This applies from February 2025. It covers technical knowledge, understanding of risks, awareness of legal obligations, and the ability to interpret AI outputs appropriately for the context.
Article 4 training course
Do I need to train my staff on AI?
Yes, if your staff interact with, deploy, or oversee AI systems. Article 4 makes AI literacy a legal requirement, not a nice-to-have. The level of training needed depends on the role: a marketing team using AI copywriting tools needs different knowledge than an engineer building ML models. All staff need baseline awareness.
Staff awareness course
What training do managers need for the EU AI Act?
Managers and decision-makers need to understand AI governance, risk assessment, and their legal responsibilities as deployers. They must be able to evaluate whether AI systems are being used appropriately and ensure proper human oversight. Our Manager course covers governance frameworks, risk classification, and compliance planning.
Manager course details
How long does EU AI Act compliance training take?
Our Staff AI Awareness course takes approximately 3.5 hours to complete. The Manager course runs around 6.5 hours. Both are self-paced online courses you can start and stop as needed. Enterprise on-site training is typically a half-day or full-day session depending on team size and depth required.
Browse all courses
What certificates does Agentic Fluxus offer?
Every course includes a verifiable completion certificate. The Staff AI Awareness Certificate proves baseline Article 4 compliance; the Manager AI Governance Certificate demonstrates deployer-level understanding. Each certificate carries a unique verification code and completion date, and can be shared on LinkedIn or presented to auditors.
View certificates
Can I train my entire team with Agentic Fluxus?
Yes. We offer team and enterprise plans with volume discounts, a central admin dashboard, and completion tracking. For teams of 50 or more, we also offer custom on-site training at your location. You get reporting tools to demonstrate Article 4 compliance across your entire organisation.
Enterprise training
Is online AI compliance training sufficient for the EU AI Act?
For most roles, yes. The EU AI Act does not mandate specific training formats; it requires that staff have sufficient AI literacy relative to their role. Online courses that cover the right material and result in verifiable certification are widely accepted. High-risk system operators may benefit from additional hands-on workshops.
Compare training options
How do I prove my company has met Article 4 requirements?
Keep records of who was trained, when, on what material, and their certification status. Agentic Fluxus provides a compliance dashboard showing completion rates across your organisation. Maintain a training register as part of your broader AI governance documentation. Auditors will want to see evidence of systematic AI literacy efforts.
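The record-keeping described above boils down to a simple training register. A minimal sketch of what one entry might hold (the field names are suggestions, not a format mandated by the Act):

```python
# Illustrative sketch of a minimal Article 4 training register entry.
# Field names are suggestions, not a mandated format.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    employee: str        # who was trained
    role: str            # context for "sufficient" literacy level
    course: str          # what material was covered
    completed_on: date   # when
    certificate_id: str  # verification code on the certificate

register = [
    TrainingRecord("A. Jansen", "Marketing", "Staff AI Awareness",
                   date(2025, 3, 12), "CERT-1234"),
]

# An auditor-facing summary is then a simple query over the register:
trained_staff = {r.employee for r in register}
print(sorted(trained_staff))  # ['A. Jansen']
```

Keeping this as structured data (rather than scattered emails) makes it trivial to report completion rates on demand.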
Start compliance tracking
Do freelancers and sole traders need AI compliance training?
If you use AI systems in your work, even tools like ChatGPT, Midjourney, or AI-powered analytics, Article 4 applies to you. As a sole trader, the deployer obligations for those tools rest with you alone. Our Staff course at 39 euros is designed to give individual professionals everything they need to comply.
Individual training
About Agentic Fluxus
What is Agentic Fluxus?
Agentic Fluxus is an Amsterdam-based AI compliance training platform. We help companies across Europe understand and comply with the EU AI Act through practical, certified online courses, in-person workshops, and enterprise training programmes. Our approach makes regulation accessible rather than intimidating.
About us
Where is Agentic Fluxus based?
We are based in Amsterdam, the Netherlands, at A'DAM Tower in Amsterdam-Noord. Being based in the EU gives us direct proximity to regulatory developments, and our in-person workshops take place at our Amsterdam location. We serve companies across all 27 EU member states and beyond.
Visit our office
What courses does Agentic Fluxus offer?
We offer three online tracks: Staff AI Awareness (3.5 hours, 39 euros) for all employees, Manager AI Awareness (6.5 hours, 59 euros) for deployers and decision-makers, and Director AI Awareness (8 hours, 99 euros) for business-level AI posture. Our in-person Leadership & Decision Makers session runs at A'DAM Tower, Amsterdam. We also run custom enterprise sessions.
View all courses
How much do Agentic Fluxus courses cost?
Staff AI Awareness is 39 euros per person. Manager AI Awareness is 59 euros per person. Director AI Awareness is 99 euros per person. The in-person Leadership & Decision Makers session is 995 euros per seat. Enterprise and team pricing includes volume discounts: the more people you train, the lower the per-person cost. Contact us for a custom quote.
See pricing
Does Agentic Fluxus offer group or enterprise discounts?
Yes. We offer tiered volume pricing for teams of 10 or more. Enterprise packages for 50-plus employees include dedicated account management, custom training content, on-site delivery options, and a compliance dashboard. Discounts range from 15% to 40% depending on team size.
Get enterprise pricing
What is the AI Readiness Audit?
Our free AI Readiness Checker is a 5-minute online assessment that evaluates how prepared your business is for the EU AI Act. It analyses your current AI usage, identifies risk areas, and gives you a personalised compliance score with recommended next steps. Over 2,000 businesses have used it so far.
Take the free audit
Who is Flux, your AI mascot?
Flux is our AI-powered compliance assistant and the friendly face of the platform. Flux guides you through courses, answers questions about the EU AI Act, and makes compliance feel approachable rather than overwhelming. Think of Flux as your knowledgeable companion for navigating AI regulation.
Chat with Flux
Technical Compliance
What is a high-risk AI system under the EU AI Act?
High-risk AI systems are those used in sensitive areas listed in Annex III of the Act, including biometric identification, critical infrastructure management, employment and worker management, access to essential services (credit scoring, insurance), law enforcement, and immigration. These systems must meet strict requirements for data quality, documentation, human oversight, and accuracy.
Risk assessment tool
What AI practices are prohibited under the EU AI Act?
The Act bans: social scoring (by public or private actors), real-time remote biometric identification in public spaces (with limited exceptions), AI that exploits vulnerabilities of specific groups, AI that manipulates human behaviour to cause harm, emotion recognition in workplaces and educational institutions, and untargeted scraping of facial images for facial recognition databases.
Full list of bans
What documentation do I need for EU AI Act compliance?
High-risk system providers need technical documentation covering system design, development process, risk management, data governance, testing results, and conformity assessment. Deployers need records of their fundamental rights impact assessment, human oversight procedures, and usage logs. All organisations need evidence of AI literacy training (Article 4).
Documentation templates
How do I classify my AI system's risk level?
Start by checking if your AI system falls under any prohibited category. Then review Annex III of the EU AI Act, which lists high-risk use cases by sector. If your system does not appear in Annex III and is not a GPAI model, it likely falls under limited or minimal risk. Our free AI Readiness Checker can help you classify your systems in under 5 minutes.
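The classification steps above form a simple decision flow: prohibited first, then Annex III, then GPAI, otherwise limited or minimal risk. A minimal sketch, where the category lists are simplified placeholders rather than the Act's actual legal tests:

```python
# Illustrative sketch of the risk-classification steps described above.
# The category sets below are simplified placeholders, not the Act's
# legal definitions; real classification needs a proper legal review.

PROHIBITED_USES = {"social scoring", "behavioural manipulation causing harm"}
ANNEX_III_AREAS = {"hiring", "credit scoring", "biometric identification",
                   "critical infrastructure", "law enforcement"}

def classify(use_case: str, is_gpai_model: bool = False) -> str:
    if use_case in PROHIBITED_USES:
        return "unacceptable risk (banned)"
    if use_case in ANNEX_III_AREAS:
        return "high risk (Annex III)"
    if is_gpai_model:
        return "GPAI (separate provisions)"
    return "limited or minimal risk"

print(classify("hiring"))          # high risk (Annex III)
print(classify("spam filtering"))  # limited or minimal risk
```

The ordering matters: a use case is checked against the bans before anything else, because a prohibited practice cannot be made compliant through high-risk controls.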
Classify your system now
What are deployer obligations under the EU AI Act?
Deployers of high-risk AI systems must: use systems according to instructions, ensure human oversight by trained personnel, monitor system operation, keep logs automatically generated by the system, conduct a fundamental rights impact assessment, inform affected individuals, and cooperate with authorities. Deployers of any AI system must ensure AI literacy (Article 4).
Deployer training course
What is a conformity assessment for AI systems?
A conformity assessment is the process of verifying that a high-risk AI system meets all requirements before it can be placed on the EU market. For most high-risk systems, providers can self-assess. But AI used in biometric identification and critical infrastructure requires assessment by an independent notified body. Think of it as a CE marking process for AI.
Learn about assessments
Do I need to register my AI system in the EU database?
Providers and deployers of high-risk AI systems must register them in the EU database before placing them on the market or putting them into service. The registration includes system identification, provider details, intended purpose, risk category, and conformity assessment status. The database is publicly accessible to promote transparency.
Registration guidance
What are the rules for general-purpose AI (GPAI) models?
GPAI model providers must maintain technical documentation, provide information to downstream providers, comply with copyright law, and publish a training data summary. Models posing systemic risk (over 10^25 FLOPs training compute) face additional requirements: adversarial testing, incident monitoring, cybersecurity protections, and energy efficiency reporting.
GPAI compliance guide
Still have questions?
Talk to our team or let Flux, our AI compliance assistant, help you find answers instantly.