Is your AI tool high-risk? How to classify AI under the EU AI Act
Find out if your AI tools are high-risk under the EU AI Act. Annex III categories explained with real examples -- ChatGPT, Copilot, Salesforce, and more.
The number one question companies ask about the EU AI Act is whether their AI tools are high-risk. The answer depends not on the tool itself but on how you use it. The regulation defines four risk tiers -- prohibited, high-risk, limited, and minimal -- and most standard business tools fall into the minimal or limited categories. This guide walks you through each tier, the eight Annex III high-risk categories, specific tool classifications, and the Article 6(3) exception that could save you significant compliance costs.
It is the use case, not the tool. ChatGPT drafting blog posts is minimal risk. The same ChatGPT screening CVs is high-risk. Classification follows application, not product name.
Most business AI tools are minimal risk. Spam filters, autocomplete, recommendation engines, and content generators carry no specific AI Act obligations beyond Article 4 AI literacy.
Annex III lists eight high-risk categories. They cover biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. If your AI operates in any of these domains, it is likely high-risk.
Article 6(3) is an escape hatch. Even an Annex III system is not high-risk if it does not materially influence decision outcomes. Document your reasoning if you rely on this exception.
The four risk tiers explained
The EU AI Act classifies every AI system into one of four risk tiers. Your tier determines your obligations, your compliance costs, and your timeline. Getting this classification right is the single most consequential step in AI Act compliance -- it shapes everything that follows.
The framework is deliberately proportionate. A spam filter and a system that decides who gets a mortgage carry fundamentally different risks for people. The regulation reflects that. Here is how the four tiers break down.
Prohibited (Article 5)
These AI practices are outlawed entirely. The prohibitions have been in force since 2 February 2025. If you deploy any of these systems, you face fines of up to EUR 35 million or 7% of global annual turnover.
Social scoring. Systems that evaluate or classify natural persons based on their social behaviour or personal characteristics, leading to detrimental treatment that is unjustified or disproportionate to the context. The ban covers both public and private actors.
Real-time remote biometric identification in public spaces. Using facial recognition or other biometric systems in publicly accessible spaces for law enforcement purposes, with narrow exceptions for missing children, imminent terrorist threats, and serious criminal suspects.
Emotion recognition in workplaces and schools. Systems that infer emotions of natural persons in the workplace or in educational institutions, except for medical or safety reasons.
Subliminal manipulation. AI that deploys subliminal techniques beyond a person's consciousness, or exploits vulnerabilities due to age, disability, or social or economic situation, to materially distort behaviour in ways likely to cause significant harm.
Untargeted facial image scraping. Building facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Biometric categorisation for sensitive attributes. Systems that categorise natural persons based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.
These prohibitions are not future obligations. They have applied since February 2025. If any of your AI systems fall into these categories, you are already in breach.
High-risk (Article 6 + Annex III)
AI systems subject to the strictest compliance requirements. These are systems that make or materially influence decisions about people in sensitive domains. The full list of domains is set out in Annex III and covers eight broad categories (detailed in the next section). High-risk system obligations for deployers take effect from 2 August 2026.
If your system is high-risk, you face conformity assessments, fundamental rights impact assessments, mandatory human oversight, technical documentation requirements, record-keeping obligations, and registration in the EU database. This is where the cost and complexity live.
Limited risk (Article 50)
Systems with transparency obligations only. No conformity assessment, no fundamental rights impact assessment -- just disclosure. The core requirement is that people must know they are interacting with or receiving content from AI.
Chatbots. Must disclose to users that they are interacting with an AI system (Article 50(1)).
Deepfakes and synthetic content. AI-generated or manipulated images, audio, or video must be labelled as artificially generated (Article 50(4)).
Emotion recognition systems. When used outside prohibited contexts, must inform the natural persons exposed to the system (Article 50(3)).
Minimal risk
Everything else. The vast majority of AI tools businesses use today fall here. Spam filters, autocomplete, recommendation engines for non-essential services, predictive text, AI-assisted spell checking, content suggestion algorithms. No specific obligations under the AI Act beyond the Article 4 AI literacy requirement that applies to all AI system operators.
Voluntary codes of practice may apply (Article 95), but they are not mandatory. If your entire AI inventory consists of minimal-risk tools, your compliance burden is light: ensure staff have appropriate AI literacy training and keep a basic inventory.
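To make the tier framework concrete, here is a minimal sketch -- in Python, with names of our own choosing rather than anything drawn from the regulation -- of how you might encode the four tiers and their headline obligations in an internal AI inventory:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers (illustrative labels, not legal terms)."""
    PROHIBITED = "prohibited"   # Article 5 -- banned since 2 February 2025
    HIGH_RISK = "high-risk"     # Article 6 + Annex III
    LIMITED = "limited"         # Article 50 transparency obligations
    MINIMAL = "minimal"         # Article 4 AI literacy only

# Headline obligations per tier, as summarised in this guide.
TIER_OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not deploy -- already in breach"],
    RiskTier.HIGH_RISK: [
        "conformity assessment (Article 43)",
        "fundamental rights impact assessment (Article 27)",
        "human oversight (Articles 14 and 26(2))",
        "EU database registration (Article 49)",
    ],
    RiskTier.LIMITED: ["disclosure and labelling duties (Article 50)"],
    RiskTier.MINIMAL: ["AI literacy training (Article 4)", "basic inventory"],
}
```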
The Annex III categories: eight domains of high-risk AI
Annex III is the heart of the high-risk classification. It lists eight categories of AI systems that the European legislator considers sufficiently consequential to warrant full regulatory oversight. If your AI system operates in any of these domains, it is presumed high-risk unless the Article 6(3) exception applies.
Each category below includes the official Annex III numbering, the scope, and real-world examples of systems that fall within it.
1. Biometrics (Annex III, point 1)
Scope. Remote biometric identification systems (not in real time), biometric categorisation systems, and emotion recognition systems not covered by the Article 5 prohibitions.
Examples. Facial recognition for building access control. Age verification using facial analysis. Voice recognition systems used to identify individuals. Gait analysis for security purposes.
Key distinction. Real-time biometric identification in public spaces is prohibited (Article 5). Non-real-time use, or use in non-public spaces, is high-risk rather than banned.
2. Critical infrastructure (Annex III, point 2)
Scope. AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
Examples. AI managing energy grid load balancing. Automated water treatment control systems. Traffic management systems that control signal timing. AI monitoring pipeline pressure in gas distribution networks.
Key distinction. The system must be a safety component or directly manage the infrastructure. Using AI to forecast energy demand for planning purposes is different from using AI to control energy distribution in real time.
3. Education and vocational training (Annex III, point 3)
Scope. AI systems used to determine access to or admission to educational and vocational training institutions, to evaluate learning outcomes, to assess the appropriate level of education, and to monitor prohibited behaviour during examinations.
Examples. University admissions systems that rank or filter applicants. Automated exam grading. Proctoring software that uses AI to flag suspicious behaviour during online tests. Systems that determine student placement into educational tracks.
Key distinction. AI tools that assist teachers without determining outcomes (such as lesson planning assistants or content recommendation engines) are not high-risk.
4. Employment, workers management, and access to self-employment (Annex III, point 4)
Scope. AI systems used in recruitment, selection, placement of applicants, making decisions affecting employment relationships (promotion, termination, task allocation), and monitoring or evaluating worker performance and behaviour.
Examples. CV screening tools that rank or filter candidates. AI-powered video interview analysis (HireVue, Pymetrics). Automated performance scoring systems. Productivity monitoring tools that influence management decisions. Systems that recommend redundancies based on performance data.
Key distinction. This is the category most relevant to businesses. If any AI tool influences who gets hired, promoted, monitored, or dismissed, it is high-risk. The key word is "influence" -- even partial automation of these decisions triggers the classification.
Category 4 catches more companies than any other Annex III domain. If you use any AI in HR processes -- from recruitment to performance review -- check this classification carefully.
5. Access to and enjoyment of essential private services and public services and benefits (Annex III, point 5)
Scope. AI systems used to evaluate creditworthiness or establish credit scores, to evaluate eligibility for public assistance benefits and services, to assess risk and set pricing for life and health insurance, and to evaluate and classify emergency calls.
Examples. Automated credit scoring for loan applications. Insurance premium calculators that use AI risk models. Systems determining eligibility for social security benefits. AI triage systems for emergency services (prioritising 112 calls).
Key distinction. The AI must evaluate or classify individuals for access to services. A recommendation engine suggesting financial products is not the same as one that decides whether you qualify for a loan.
6. Law enforcement (Annex III, point 6)
Scope. AI systems used by law enforcement for individual risk assessments, as polygraphs or similar tools, to evaluate evidence reliability, to predict criminal or re-offending risk, for profiling during detection and investigation, and for crime analytics regarding natural persons.
Examples. Predictive policing tools that identify likely crime locations or suspects. AI evidence analysis in criminal investigations. Risk assessment tools used in bail or parole decisions. Systems that profile individuals during border checks.
7. Migration, asylum, and border control management (Annex III, point 7)
Scope. AI systems used as polygraphs or similar tools, to assess risks (security, irregular immigration, or health), to assist in the examination of applications for asylum, visa, and residence permits, and for detecting, recognising, or identifying persons in the context of migration.
Examples. Automated visa application processing systems. AI risk profiling of travellers at border crossings. Facial recognition in border control automated gates. Systems assessing asylum applications.
8. Administration of justice and democratic processes (Annex III, point 8)
Scope. AI systems used by judicial authorities to research and interpret facts and law, to apply the law to facts, and to assist in alternative dispute resolution. Also covers systems intended to influence the outcome of elections or referendums.
Examples. AI-assisted sentencing recommendation tools. Case routing systems in courts. AI-generated legal analysis used in judicial decision-making. Automated dispute resolution platforms that determine outcomes.
AI systems used in employment, workers management and access to self-employment should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of those persons and workers' rights.
EU AI Act, Recital 51, Regulation (EU) 2024/1689
Is [specific tool] high-risk?
This is the question everyone actually Googles. Rather than abstract legal categories, you want to know: is the tool I use every day going to require a conformity assessment? The answer depends almost entirely on how you use it.
It is not the tool that is high-risk -- it is the use case. The same software can be minimal risk in one context and high-risk in another. Classify each distinct application of a tool separately.
ChatGPT, Claude, and other general-purpose assistants
When used for content generation, research, brainstorming, drafting, or summarisation, these are minimal-risk tools. No specific AI Act obligations beyond Article 4 AI literacy. However, if you use them to screen job applicants, evaluate creditworthiness, or make decisions about people in Annex III domains, the use case is high-risk. The GPAI provider (OpenAI, Anthropic) has separate obligations under Articles 51 to 56.
Microsoft Copilot
Used for drafting documents, summarising meetings, generating presentations, and assisting with email, Copilot is minimal risk. It becomes high-risk only if integrated into workflows that evaluate employee performance or inform employment decisions. Standard productivity use carries no specific obligations.
Salesforce Einstein
Used for sales forecasting, lead scoring, and customer insights, Einstein is minimal risk. It becomes high-risk if used for credit decisions, insurance pricing, or determining eligibility for essential services. Deployers must assess each use case against the Annex III categories.
Recruitment and HR screening tools
AI-powered video interview analysis, candidate scoring, and automated CV screening are high-risk under Annex III, Category 4 (employment). These tools directly influence hiring decisions about natural persons. Full conformity assessment, fundamental rights impact assessment, human oversight, and EU database registration are required.
Grammarly
Spell checking, grammar correction, and writing style suggestions are minimal risk. Grammarly does not make or influence decisions about people in any Annex III domain. Article 4 AI literacy applies to staff who use it, and nothing more.
AI writing and note-taking assistants
Content generation, note summarisation, and writing assistance tools are minimal risk when used for their intended purpose. No specific AI Act obligations. If repurposed for employee evaluation or similar Annex III use cases, the classification changes accordingly.
Fraud detection systems
Classification depends on the implementation. Basic transaction flagging routed to human review will typically be minimal risk. Systems that automatically block accounts, deny transactions, or determine fraud without human intervention can be high-risk under Annex III, Category 5 (essential services) if they affect access to financial services.
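The common thread across all of these tools is that you inventory use cases, not products. As a hedged illustration -- the record format and field names below are our own, not an official template -- a per-use-case register might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UseCase:
    """One distinct application of an AI tool, classified on its own."""
    tool: str
    purpose: str
    annex_iii_domain: Optional[str]  # e.g. "employment (point 4)", or None
    tier: str

# The same product can sit in two tiers at once, depending on use.
register = [
    UseCase("ChatGPT", "drafting blog posts", None, "minimal"),
    UseCase("ChatGPT", "screening CVs", "employment (point 4)", "high-risk"),
    UseCase("Copilot", "summarising meetings", None, "minimal"),
    UseCase("Grammarly", "grammar correction", None, "minimal"),
]

for entry in register:
    print(f"{entry.tool} -- {entry.purpose}: {entry.tier}")
```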
The Article 6(3) exception: when Annex III systems are not high-risk
This is the clause most companies do not know about, and it can save you significant compliance costs. Article 6(3) establishes that even if an AI system is listed in Annex III, it is not classified as high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.
Specifically, an Annex III system is not high-risk if it meets one of these conditions:
It performs a narrow procedural task. The AI system performs a task that is purely procedural, such as converting unstructured data into structured data, classifying incoming documents into categories, or detecting duplicates among existing entries.
It improves the result of a completed human activity. The AI system is intended to improve the result of a previously completed human activity -- for example, a grammar check on a human-written legal document or a calculation verification on a human-prepared financial statement.
It detects decision-making patterns without replacing human assessment. The AI system is intended to detect decision-making patterns or deviations from prior patterns and is not meant to replace or influence the previously completed human assessment without proper human review.
It performs a preparatory task. The AI system is intended to perform a preparatory task for an assessment relevant to the use cases listed in Annex III.
If you rely on Article 6(3) to classify an Annex III system as not high-risk, document your reasoning before the system goes live. Article 6(4) requires providers who apply this exception to document their assessment before the system is placed on the market or put into service, and to make it available to national competent authorities upon request. This is not optional.
The exception does not apply if the AI system performs profiling of natural persons (Article 6(3), final paragraph). If your system profiles people -- creating or inferring characteristics, preferences, behaviours, or attributes -- the exception is unavailable regardless of how narrow the task seems.
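The Act prescribes no format for this documentation. As an illustrative sketch only -- the record structure and field names are our own invention -- an Article 6(3) assessment record might capture the following, including the profiling bar:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article63Assessment:
    """Record of an Article 6(3) exception assessment (illustrative format)."""
    system_name: str
    annex_iii_domain: str          # e.g. "employment (point 4)"
    condition_relied_on: str       # one of the four conditions listed above
    reasoning: str                 # why the system does not materially influence outcomes
    performs_profiling: bool       # profiling makes the exception unavailable
    assessed_on: date = field(default_factory=date.today)

    def exception_available(self) -> bool:
        # The exception never applies to systems that profile natural
        # persons, regardless of how narrow the task seems.
        return not self.performs_profiling
```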
What high-risk means for your business
If you have determined that one or more of your AI systems is high-risk, here is what you are required to do. These obligations fall primarily on providers, but deployers have their own substantial requirements under Articles 26 and 27.
Provider obligations (building or supplying the AI system)
Conformity assessment (Article 43). Before placing a high-risk AI system on the market, the provider must demonstrate conformity with the requirements of Articles 8 to 15. For most Annex III systems, this is a self-assessment. For certain biometric systems, an independent notified body is required.
Risk management system (Article 9). A continuous, iterative process throughout the AI system's lifecycle. Identify and analyse known and foreseeable risks, estimate and evaluate risks, adopt appropriate risk management measures, and test to ensure residual risks are acceptable.
Data governance (Article 10). Training, validation, and testing data sets must be subject to governance and management practices covering design choices, data collection, preparation, relevance, completeness, and bias examination.
Technical documentation (Article 11). Comprehensive documentation drawn up before the system is placed on the market and kept up to date. Must enable authorities to assess compliance.
Record-keeping (Article 12). High-risk AI systems must allow for automatic recording of events (logs) relevant to identifying risks, facilitating post-market monitoring, and monitoring operation.
Human oversight (Article 14). High-risk AI systems must be designed and developed so they can be effectively overseen by natural persons during their period of use. Oversight measures must be commensurate with the risks.
Registration in the EU database (Article 49). Before placing a high-risk AI system on the market or putting it into service, the provider must register the system in the EU database established under Article 71.
Deployer obligations (using the AI system in your business)
Fundamental rights impact assessment (Article 27). Before deploying a high-risk AI system, certain deployers -- bodies governed by public law, private entities providing public services, and deployers of credit scoring or insurance pricing systems -- must assess its impact on the fundamental rights of affected persons. This includes identifying affected groups, specific risks, human oversight measures, and planned mitigation steps.
Use in accordance with instructions (Article 26(1)). Deploy the system in accordance with the instructions of use provided by the provider. This sounds simple but carries real weight: deviating from intended use can shift you from deployer to provider.
Human oversight (Article 26(2)). Assign human oversight to natural persons who have the necessary competence, training, and authority. Ensure they can properly understand the system's capacities and limitations.
Input data quality (Article 26(4)). Ensure that input data is relevant and sufficiently representative for the intended purpose of the high-risk AI system.
Monitoring and incident reporting (Article 26(5)). Monitor the operation of the high-risk AI system and report serious incidents to the provider and the relevant market surveillance authority.
Transparency (Article 26(11)). Inform natural persons that they are subject to the use of a high-risk AI system where it makes, or assists in making, decisions relating to them.
For a practical step-by-step walkthrough of building a compliance programme, see our EU AI Act compliance checklist. For guidance on the Article 4 AI literacy training requirement that applies to all tiers, see our workforce training guide.
Decision flowchart: classify your AI system
Work through each question to determine the risk classification of a specific AI system.
Decision tree summary
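In place of the interactive flowchart, the same questions can be expressed as a short classification function. This is a simplification of the logic described throughout this guide -- in particular, it collapses the four Article 6(3) conditions into a single yes/no input -- and is illustrative, not legal advice:

```python
def classify_use_case(
    prohibited_practice: bool,   # an Article 5 practice (social scoring, ...)
    annex_iii_domain: bool,      # falls within one of the eight Annex III categories
    meets_6_3_condition: bool,   # narrow procedural / improves completed human work /
                                 # pattern detection with human review / preparatory task
    performs_profiling: bool,    # profiling blocks the Article 6(3) exception
    transparency_trigger: bool,  # chatbot, deepfake, emotion recognition (Article 50)
) -> str:
    """Walk one AI use case through the guide's decision tree."""
    if prohibited_practice:
        return "prohibited (Article 5): do not deploy"
    if annex_iii_domain:
        if meets_6_3_condition and not performs_profiling:
            return "not high-risk (Article 6(3)): document the assessment"
        return "high-risk (Article 6 + Annex III)"
    if transparency_trigger:
        return "limited risk (Article 50 transparency duties)"
    return "minimal risk (Article 4 AI literacy only)"

# ChatGPT screening CVs: Annex III employment, profiling likely, no 6(3) escape.
print(classify_use_case(False, True, False, True, False))  # -> high-risk
```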
Sources and further reading
This guide draws on official EU institutional sources, above all the text of Regulation (EU) 2024/1689 itself. We recommend consulting them directly for the most current guidance.
Our team can audit your AI stack
Book a free call to classify your AI systems, identify high-risk use cases, and build a compliance roadmap tailored to your organisation.