The EU AI Act in plain English for the people who actually have to comply with it. Article 4 literacy, risk tiers, Article 50 transparency, GPAI obligations, key dates, and an honest action plan for this quarter.
Last updated 2026-05-09. Plain-English explanation, not legal advice. For specific situations, consult counsel and the official Act text.
2 August 2026: Article 6 and Annex III obligations apply in full to AI systems placed on the EU market after this date.
The Act is the world’s first comprehensive AI regulation. Understanding why it exists helps you read its obligations less as bureaucratic burden and more as a predictable framework for using AI in Europe.
The Act creates a uniform legal framework so AI systems sold or used across all 27 EU member states meet the same baseline of safety, transparency, and human oversight. Without it, every country was drifting toward its own rules — bad for businesses, worse for users.
AI systems that affect employment, credit, education, public services, and law enforcement can entrench bias and remove human judgement at exactly the wrong moments. The Act forces those systems through a risk-management discipline before they touch real people.
Both providers (the people building AI) and deployers (the businesses using AI) now have a checklist of obligations proportionate to the risk tier. That removes the 'is this OK?' anxiety that previously slowed responsible deployment.
Article 4 says everyone touching AI at work must understand what they're touching. That makes AI literacy a normal staff skill in 2026, the way data-protection literacy became normal after GDPR in 2018.
The Act doesn’t treat all AI the same. Obligations scale with the potential for harm. Most everyday business AI sits in the bottom two tiers; the heaviest obligations apply to a narrow set of high-risk uses.
Unacceptable risk (prohibited under Article 5)
Examples: Social scoring by public authorities. Real-time remote biometric ID in public for law enforcement (with narrow exceptions). Emotion recognition in workplaces and schools. Manipulative subliminal techniques.
What you must do: If your system fits one of these descriptions, stop using it.
High risk
Examples: Recruitment screening. Credit scoring. Critical infrastructure. Education access decisions. Border control. Biometric categorisation.
What you must do: Heavy obligations: a risk management system, data governance, technical documentation, logging, transparency to deployers, human oversight, and accuracy, robustness, and cybersecurity requirements. Conformity assessment before market placement.
Limited risk
Examples: Chatbots that interact with people. Deepfakes. AI-generated content. Emotion recognition where allowed.
What you must do: Transparency: tell users they're interacting with AI. Label deepfakes. Mark synthetic content as machine-generated.
Minimal risk
Examples: Most everyday business AI: spam filters, AI-assisted spreadsheet work, generic recommendation engines, document summarisation.
What you must do: No specific obligations under the Act, but Article 4 (literacy) and your sector's existing rules still apply.
The Act distinguishes between people who build AI, people who use AI at work, and the supply chain in between. Identifying which role (or roles) your organisation plays is step one; you can be more than one at once.
Provider
You build, train, or develop the AI system itself, then place it on the EU market or put it into service.
Examples: Foundation-model labs. SaaS vendors shipping AI features. In-house teams that productise an AI system across the org.
Obligations: Heaviest. Conformity assessment, technical documentation, post-market monitoring, ongoing risk management.
Deployer
You use someone else's AI system at work — to make decisions, automate workflows, or serve customers.
Examples: An HR team running ChatGPT for recruitment shortlisting. A legal team using an AI contract reviewer. A retailer running a chatbot.
Obligations: Article 4 staff literacy. Article 50 transparency where applicable. For high-risk AI: human oversight, monitoring, logging, and compliance with the provider's instructions for use.
Importer
You bring a non-EU AI system into the EU market on behalf of a third-country provider.
Examples: An EU distributor reselling a US-built AI product. A reseller importing AI-equipped hardware.
Obligations: Verify the provider's conformity, keep documentation, cooperate with regulators.
Distributor
You make an AI system available on the EU market without changing it (you're a reseller in the supply chain, not the maker).
Examples: Marketplaces, value-added resellers, integration partners.
Obligations: Verify CE marking + accompanying documentation, cooperate with regulators if issues arise.
Authorised representative
An EU-established entity appointed by a non-EU provider to handle their EU compliance obligations.
Examples: EU-based legal entity acting on behalf of a US AI company.
Obligations: Hold technical documentation, register the AI system in the EU database where required, cooperate with national competent authorities.
Most operational businesses are deployers: you use AI tools at work without building them yourselves. Article 4 staff literacy and Article 50 transparency are your two big ones.
Article 4 (paraphrased)
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context the AI systems are to be used in.
In plain terms: if your team uses AI tools at work, you have to make sure they understand what those tools do, what limits they have, and when to involve a human. Training must be proportionate to the role. Someone running an experiment in a sandbox needs less than someone deploying AI in a customer-facing flow.
The Act is silent on how you prove this. The cleanest defensible answer in 2026 is per-learner training with dated, verifiable certificates. That gives you, on the day a regulator asks, a list: who was trained, when, on what, and a public URL the regulator can use to verify the certificate is real and unrevoked.
Generic videos played at a kickoff meeting do not pass this bar. Per-learner per-cert evidence does.
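The Act doesn't prescribe a format for that list, so the sketch below is only one way it could be kept: a hypothetical per-learner record plus a helper that answers the audit question "who still has no valid certificate?". The field names and the untrained() function are our assumptions for illustration, not anything the Act mandates.

```python
# Illustrative only: one way to keep per-learner training evidence.
# Field names and layout are assumptions, not prescribed by the AI Act.
from dataclasses import dataclass
from datetime import date


@dataclass
class TrainingRecord:
    learner: str           # who was trained
    course: str            # on what
    completed_on: date     # when
    certificate_url: str   # public URL a regulator can use to verify the certificate
    revoked: bool = False  # certificates should be revocable if issued in error


def untrained(staff: list[str], records: list[TrainingRecord]) -> list[str]:
    """Return staff who have no valid (non-revoked) certificate on file."""
    covered = {r.learner for r in records if not r.revoked}
    return [person for person in staff if person not in covered]


if __name__ == "__main__":
    records = [
        TrainingRecord("a.khan", "AI literacy fundamentals", date(2026, 3, 2),
                       "https://example.com/certs/a-khan-2026-03-02"),
    ]
    print(untrained(["a.khan", "b.osei"], records))  # -> ['b.osei']
```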
If you operate a chatbot, an AI-driven phone system, or any AI that interacts with people, Article 50 says you must tell users they are interacting with an AI system, unless it's already obvious from the context.
For deepfakes and AI-generated content, the disclosure rule is stricter: synthetic media must be marked as such. For text content, you must disclose AI generation when the text is published with the purpose of informing the public on matters of public interest, with specific exceptions.
The disclosure itself is usually a one-line UI notice. The harder part is making sure it actually appears on every surface where AI interacts with humans, including third-party plugins and embedded widgets.
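One practical way to make the notice hard to miss on any surface is to attach it at the single layer every front end calls, so a new widget or plugin inherits it automatically. The sketch below is a hypothetical illustration; the wording, function names, and session handling are assumptions, not anything Article 50 specifies.

```python
# Illustrative sketch, not a compliance guarantee: add the disclosure once per
# conversation at the layer every surface (web widget, phone system, embedded
# plugin) calls, so no individual front end can forget it.
AI_DISCLOSURE = "You're talking to an AI assistant, not a human."

_disclosed_sessions: set[str] = set()


def wrap_reply(session_id: str, reply_text: str) -> str:
    """Prepend the disclosure to the first reply of each conversation."""
    if session_id not in _disclosed_sessions:
        _disclosed_sessions.add(session_id)
        return f"{AI_DISCLOSURE}\n\n{reply_text}"
    return reply_text
```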
1 August 2024: The EU AI Act formally became law (entered into force). Most provisions phase in over the following 24 to 36 months.
2 February 2025: Two big ones: Article 4 (organisations must ensure their staff have sufficient AI literacy) and Article 5 (prohibited practices) became enforceable.
2 August 2025: Obligations for providers of general-purpose AI models (foundation model providers) start applying.
2 August 2026: High-risk system rules become fully enforceable for systems placed on the market after this date.
High-risk systems already on the EU market when the Act took effect must also comply by this date.
A six-step plan that gets you to defensible compliance for most operational businesses. Sequence matters: do them in this order.
Step 1: Inventory your AI. List every tool, plugin, model, agent, and embedded AI feature your team touches. Include the obvious (ChatGPT) and the ones that hide behind features (smart search in your CRM, autocomplete in your IDE, M365 Copilot). A sketch of one way to record this sits after the steps.
Step 2: Classify each tool by risk tier. Most operational tools fall into limited or minimal risk. A small number trigger high-risk obligations (e.g. AI in recruitment). The classification determines what compliance actually looks like.
Step 3: Train your staff. Article 4 isn't theoretical. Staff need 'sufficient AI literacy' for their role. Training must be specific enough to defend in an audit. Per-learner verifiable certificates make this defensible.
Step 4: Cover your transparency duties. If you run a chatbot, publish AI-generated content, or use emotion recognition where allowed, you must disclose to users. A small notice plus clear labelling usually does it.
Step 5: Write an AI use policy. An internal policy that names which tools are sanctioned, what data can flow into them, and who has approval rights. The policy is itself audit evidence.
Step 6: Keep the evidence pack. Per-learner certificates with timestamps. Acknowledged-policy receipts. Tool inventory with risk tiers. Incident log if something goes wrong. This is what a regulator asks for.
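As promised in step 1, here is a minimal sketch of how the inventory (steps 1 and 2) and the evidence pack (step 6) could be recorded. The record format, field names, and example entries, including the fictional recruitment vendor, are assumptions for illustration; the Act does not mandate any particular format.

```python
# Illustrative sketch of the inventory and evidence pack from steps 1, 2 and 6.
# The tier names follow the Act; everything else here is an assumed example format.
from dataclasses import dataclass

TIERS = ("prohibited", "high", "limited", "minimal")


@dataclass
class AITool:
    name: str
    vendor: str
    our_role: str    # provider, deployer, importer, distributor, or authorised representative
    risk_tier: str   # one of TIERS
    owner: str       # who inside the business answers for this tool
    notes: str = ""


inventory = [
    AITool("ChatGPT", "OpenAI", "deployer", "minimal", "ops.lead",
           "Drafting and summarisation; no customer personal data"),
    AITool("CV screening add-on", "ExampleVendor (fictional)", "deployer", "high", "hr.lead",
           "Recruitment shortlisting: human review before any rejection"),
]

# The evidence pack a regulator would ask for, per step 6:
evidence_pack = {
    "training_certificates": "per-learner, dated, verifiable URLs",
    "policy_acknowledgements": "who accepted the AI use policy, and when",
    "tool_inventory": [f"{t.name} ({t.risk_tier} risk)" for t in inventory],
    "incident_log": [],
}

# Simple sanity check: every tool must be classified into a known tier.
for tool in inventory:
    assert tool.risk_tier in TIERS, f"Unclassified tool: {tool.name}"
```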
Compliance is the floor, not the ceiling. The same staff literacy you build for Article 4 also lets your team start automating the workflows that drain operational time: document comparison, customer communication drafting, internal reporting. We call that Operational AI and we have a 12-month programme that builds it inside your business with your own team.
The cleanest sequence we’ve seen: ship literacy training first (Article 4 evidence + a baseline of what your people understand), then run an Operational AI build that converts that literacy into deployed workflows. Compliance pays for itself only when capability follows.
Per-learner courses with verifiable certificates from €39. The cleanest defensible evidence we know how to produce.