Classify any AI system in three questions. Walks the same decision tree the regulation lays out: Article 5 prohibited practices, then Annex III high-risk domains, then Article 50 transparency obligations. Returns the verdict, the underlying Article, and the specific obligations that apply.
Source: Regulation (EU) 2024/1689 on EUR-Lex. Self-assessment, not legal advice.
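To make the three-question flow concrete, here is a minimal Python sketch of the same decision tree. The class, field names, and return strings are illustrative assumptions for this example, not text from Regulation (EU) 2024/1689:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    description: str
    # Article 5 signals (prohibited practices) -- illustrative flags
    social_scoring: bool = False
    exploits_vulnerable_persons: bool = False
    realtime_biometric_id_in_public: bool = False
    emotion_recognition_at_work_or_school: bool = False
    # Article 6 / Annex III signals (high-risk domains)
    annex_iii_domain: str | None = None  # e.g. "employment", "essential services"
    influences_decisions_about_people: bool = False
    # Article 50 signals (transparency)
    interacts_with_people: bool = False

def classify(uc: UseCase) -> str:
    # Question 1: does the use case match an Article 5 prohibited practice?
    if (uc.social_scoring or uc.exploits_vulnerable_persons
            or uc.realtime_biometric_id_in_public
            or uc.emotion_recognition_at_work_or_school):
        return "PROHIBITED (Article 5): stop deployment"
    # Question 2: does it materially influence decisions about people
    # in an Annex III domain? (Article 6)
    if uc.annex_iii_domain and uc.influences_decisions_about_people:
        return "HIGH-RISK (Article 6 + Annex III): full deployer obligations"
    # Question 3: does it trigger Article 50 transparency obligations?
    if uc.interacts_with_people:
        return "LIMITED RISK (Article 50): transparency obligations"
    # Otherwise minimal risk; Article 4 AI literacy still applies.
    return "MINIMAL RISK: Article 4 AI literacy only"
```

The order matters: Article 5 is checked first because a prohibited practice ends the analysis regardless of domain, and Annex III is checked before Article 50 because high-risk obligations subsume the transparency duties.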
Article 5 of Regulation (EU) 2024/1689 bans a small but specific set of AI practices in the EU. The list includes social scoring, manipulation or exploitation of vulnerable persons, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces or educational institutions, untargeted scraping of facial images to build recognition databases, and biometric categorisation inferring race, political opinions, trade-union membership, religion, sex life, or sexual orientation.
Regulation (EU) 2024/1689 takes a risk-based approach. Obligations scale with the potential for harm.
Prohibited (Article 5): social scoring, manipulation, real-time biometric ID, emotion recognition at work. Fines up to EUR 35M or 7% of global turnover.
High-risk (Article 6 + Annex III): conformity assessment, FRIA, technical documentation, human oversight, post-market monitoring. Fines up to EUR 15M or 3% of global turnover.
Limited risk (Article 50): disclose AI interaction, label AI-generated content, inform subjects of emotion recognition. No conformity assessment.
Minimal risk: Article 4 AI literacy training for staff; no specific obligations beyond that. Spam filters, spell check, recommendation engines.
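The same four tiers as a quick-reference mapping. The structure and key names are mine, not the regulation's, and each Article 99 cap applies as the higher of the fixed amount and the turnover percentage:

```python
# Tier -> legal basis, deployer obligations, and maximum fine (Article 99).
# Keys and layout are illustrative assumptions for this sketch.
TIERS = {
    "prohibited": {
        "legal_basis": "Article 5",
        "obligations": ["do not deploy"],
        "max_fine_eur": 35_000_000, "max_fine_turnover_pct": 7,
    },
    "high_risk": {
        "legal_basis": "Article 6 + Annex III",
        "obligations": ["conformity assessment", "FRIA (Article 27)",
                        "technical documentation", "human oversight",
                        "post-market monitoring"],
        "max_fine_eur": 15_000_000, "max_fine_turnover_pct": 3,
    },
    "limited_risk": {
        "legal_basis": "Article 50",
        "obligations": ["disclose AI interaction",
                        "label AI-generated content",
                        "inform subjects of emotion recognition"],
        # Article 99(4) also covers transparency breaches
        "max_fine_eur": 15_000_000, "max_fine_turnover_pct": 3,
    },
    "minimal_risk": {
        "legal_basis": "Article 4",
        "obligations": ["AI literacy training for staff"],
        "max_fine_eur": None, "max_fine_turnover_pct": None,
    },
}
```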
Common tools, classified.
An internal productivity tool: no Annex III domain, no decisions about people. Article 4 AI literacy applies; nothing else.
A CV-screening or recruitment tool: employment is Annex III, Category 4. Conformity assessment, FRIA, human oversight, and technical documentation are all required.
A customer-facing chatbot: it interacts with people, so Article 50 transparency applies. Disclose the AI to users at the start of the conversation.
An employee emotion-monitoring tool: emotion recognition in workplaces is explicitly banned under Article 5(1)(f). Stop deployment.
A credit-scoring or insurance-pricing model: access to essential services is Annex III, Category 5. Full high-risk obligations apply.
ChatGPT itself is built on a general-purpose AI (GPAI) model, regulated under Chapter V (Articles 51 to 56), not a high-risk system. But how you deploy ChatGPT determines its tier under the AI Act. Using ChatGPT for content generation or research is minimal risk. Using it to screen job applicants, score credit, or influence employment decisions moves you into Annex III high-risk territory, because the use case sits in an Annex III domain (employment, essential services). The tool is not high-risk; your application of it can be.
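The classify() sketch from the top of the page makes the point directly: the same tool lands in different tiers depending on the use case (hypothetical inputs):

```python
# Same model, two deployments: the use case, not the tool, sets the tier.
drafting = UseCase(description="ChatGPT used for research and content drafts")
screening = UseCase(
    description="ChatGPT used to rank job applicants",
    annex_iii_domain="employment",
    influences_decisions_about_people=True,
)

print(classify(drafting))   # MINIMAL RISK: Article 4 AI literacy only
print(classify(screening))  # HIGH-RISK (Article 6 + Annex III): full deployer obligations
```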
Annex III to Regulation (EU) 2024/1689 lists eight categories of high-risk AI use: biometric identification or categorisation, critical infrastructure (water, gas, electricity, traffic), education and vocational training, employment (recruitment, promotion, termination, performance evaluation), access to essential services (credit scoring, insurance pricing, public benefits), law enforcement, migration and border control, and administration of justice and democratic processes. If an AI system materially influences decisions about people in any of these domains, Article 6 classifies it as high-risk.
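For reference, the eight Annex III areas as an enum. The names are paraphrases of the Annex III headings, not official labels; the numbers follow the Annex's own ordering (employment is Category 4, essential services Category 5, matching the worked examples above):

```python
from enum import Enum

class AnnexIII(Enum):
    BIOMETRICS = 1               # identification and categorisation
    CRITICAL_INFRASTRUCTURE = 2  # water, gas, electricity, traffic
    EDUCATION = 3                # education and vocational training
    EMPLOYMENT = 4               # recruitment, promotion, termination
    ESSENTIAL_SERVICES = 5       # credit scoring, insurance, benefits
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER = 7         # migration, asylum, border control
    JUSTICE_DEMOCRACY = 8        # justice and democratic processes
```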
High-risk AI (Article 6 plus Annex III) carries the heaviest deployer obligations short of prohibition: conformity assessment, technical documentation, human oversight, post-market monitoring, fundamental rights impact assessment (Article 27), and Article 4 AI literacy. Limited risk (Article 50) carries transparency obligations only: users must know they are interacting with AI, AI-generated content must be labelled, emotion recognition or biometric categorisation must be disclosed to subjects. No conformity assessment, no FRIA. The difference is multiple orders of magnitude in compliance effort.
Article 5 of the EU AI Act lists specific AI practices that are banned outright in the EU. The list covers manipulation or exploitation of vulnerable persons, social scoring, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces or educational institutions, untargeted scraping of facial images to build recognition databases, and biometric categorisation inferring sensitive characteristics like race or political opinions. Deploying a prohibited practice carries the highest fine tier under Article 99: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Under-classification (treating a high-risk system as minimal risk) is the biggest exposure. If a national market surveillance authority audits and finds the classification was wrong, you face Article 99 fines up to EUR 15 million or 3% of global turnover for the high-risk obligations you skipped, plus civil liability if harm has occurred. Over-classification (treating a minimal-risk system as high-risk) wastes resources but is not a regulatory risk. When in doubt, classify higher and consult a lawyer in the relevant Member State.
Not yet. GPAI providers (developers of general-purpose AI models like GPT, Claude, Gemini) fall under Chapter V and Article 101, a separate regime from deployer obligations. This classifier focuses on deployer classification under Article 5, Article 6, Annex III, and Article 50, which covers the vast majority of businesses using AI. If you are a GPAI provider, the obligations are different: technical documentation, training-data summaries, and copyright compliance under Article 53, plus additional obligations for systemic-risk models under Article 55. Use the deeper-dive blog post linked below for that path.
The classifier gives you the tier. The article walks through Annex III categories with worked examples, ChatGPT vs Copilot vs Salesforce, the boundary cases that trip compliance teams up, and the documentation trail you need for an audit.