EU AI Act

Find out your exact EU AI Act obligations in 2 minutes.

Global founders selling into the EU need clarity fast. Get your risk category and next compliance steps without guesswork.

Informational guidance only. Not legal advice.

What AI Act risk classification means

EU AI Act risk classification is the process of determining whether an AI system is prohibited, high risk, limited risk, or minimal risk based on how it is used and who it affects.

For builders, this matters because risk level directly changes compliance work: documentation, transparency, human oversight, post-market monitoring, and go-to-market requirements.

A quick classification early in product design helps teams avoid costly rework and ship responsibly in the EU market.

EU AI Act risk categories

Use this side-by-side view to compare category definitions, example systems, and likely compliance expectations.

Unacceptable Risk (Prohibited)

Definition
AI uses that are considered unacceptable in the EU because they create clear harm to rights, safety, or democratic values.
Example systems
Social scoring, manipulative systems exploiting vulnerabilities, certain real-time remote biometric identification use cases.
Compliance burden
Not allowed in the EU market.
Likely obligations
No route to compliance for prohibited practices; redesign is required.

High Risk

Definition
AI systems used in sensitive contexts where outcomes can significantly affect people.
Example systems
Hiring and worker management tools, education assessment, access to essential services, healthcare or critical infrastructure support systems.
Compliance burden
High compliance burden before and during deployment.
Likely obligations
Likely requires technical documentation, risk management, human oversight, monitoring, and post-market controls.

Limited Risk

Definition
AI systems that are generally allowed but require transparency toward users.
Example systems
Chatbots, synthetic media generation, AI assistants where users should know they are interacting with AI.
Compliance burden
Moderate compliance burden focused on transparency.
Likely obligations
Likely requires AI disclosure, labeling of generated content, and clear user-facing notices.

Minimal Risk

Definition
AI systems with low impact on rights and safety in typical product usage.
Example systems
Spam filtering, basic recommendation support, low-impact productivity features.
Compliance burden
Low mandatory burden under the AI Act risk framework.
Likely obligations
Usually no specific AI Act duties, but general obligations (for example GDPR) can still apply.
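If your team wants to carry this comparison into product code, for example to drive a report template, here is a minimal TypeScript sketch of how the categories and their likely obligations could be modeled. All type and field names are illustrative choices for this example, not terms defined by the AI Act.

```typescript
// Illustrative data model only; names are ours, not terms from the AI Act.
type RiskCategory = "prohibited" | "high" | "limited" | "minimal";

interface CategoryProfile {
  definition: string;
  exampleSystems: string[];
  likelyObligations: string[];
}

const riskCategories: Record<RiskCategory, CategoryProfile> = {
  prohibited: {
    definition: "Uses considered unacceptable in the EU",
    exampleSystems: ["social scoring", "manipulative systems exploiting vulnerabilities"],
    likelyObligations: ["no route to compliance; redesign required"],
  },
  high: {
    definition: "Sensitive contexts where outcomes significantly affect people",
    exampleSystems: ["hiring tools", "education assessment", "access to essential services"],
    likelyObligations: ["technical documentation", "risk management", "human oversight", "post-market monitoring"],
  },
  limited: {
    definition: "Generally allowed, with transparency toward users",
    exampleSystems: ["chatbots", "synthetic media generation"],
    likelyObligations: ["AI disclosure", "labeling of generated content"],
  },
  minimal: {
    definition: "Low impact on rights and safety in typical product usage",
    exampleSystems: ["spam filtering", "basic recommendation support"],
    likelyObligations: ["usually no specific AI Act duties (other laws such as GDPR can still apply)"],
  },
};
```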

How to classify an AI system under the EU AI Act

Start with this lightweight interactive demo. It gives a preliminary classification before you run a full compliance scan; a sketch of how answers like these can map to a category follows after the questions.

Interactive AI Act Classifier

Answer a few quick questions to get a preliminary AI Act risk classification.


1. Does the AI make decisions affecting people's rights or access to services?
2. Is it used in hiring, education, law enforcement, critical infrastructure, or healthcare?
3. Does it generate synthetic media, chatbot responses, or AI-generated content shown to users?
4. Does it use biometric identification or process sensitive personal data in decision-making?
5. Is it only a low-impact assistant or productivity tool with no material rights impact?

Progress: 0/5 questions answered.

Complete all questions to unlock a reliable preliminary classification.
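To make the flow concrete, here is a minimal TypeScript sketch of how answers to the five questions above could be mapped to a preliminary category. The field names and the ordering of the rules are assumptions made for this example; they are not the tool's actual logic and not a legal determination.

```typescript
// Illustrative mapping from the five demo questions to a preliminary category.
// Assumed logic for this sketch only; not the classifier's real rules.
interface ClassifierAnswers {
  affectsRightsOrServices: boolean;  // Q1: decisions affecting rights or access to services
  sensitiveSector: boolean;          // Q2: hiring, education, law enforcement, critical infrastructure, healthcare
  generatesAiContent: boolean;       // Q3: chatbots, synthetic media, AI-generated content shown to users
  biometricOrSensitiveData: boolean; // Q4: biometric identification or sensitive personal data in decisions
  lowImpactOnly: boolean;            // Q5: low-impact assistant or productivity tool
}

type PreliminaryResult = "high" | "limited" | "minimal" | "needs-review";

function preliminaryClassification(a: ClassifierAnswers): PreliminaryResult {
  // Signals associated with high-risk use cases take priority.
  if (a.sensitiveSector || a.biometricOrSensitiveData || a.affectsRightsOrServices) {
    return "high";
  }
  // Transparency-focused duties for AI-generated content and chatbots.
  if (a.generatesAiContent) {
    return "limited";
  }
  // Low-impact features with no material rights impact.
  if (a.lowImpactOnly) {
    return "minimal";
  }
  // Mixed or incomplete signals: flag for a fuller assessment.
  return "needs-review";
}
```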

AI Act risk classification FAQ

What is AI Act risk classification?

AI Act risk classification is the EU framework that groups AI systems into prohibited, high-risk, limited-risk, or minimal-risk categories based on impact and context of use.

What is considered high-risk AI under the EU AI Act?

High-risk AI typically includes systems used in areas like hiring, education, law enforcement, critical infrastructure, and healthcare where decisions can materially affect people.

Are chatbots high-risk AI?

Chatbots are usually treated as limited risk and must meet transparency duties, such as informing users they are interacting with AI. Risk can increase depending on the use case.

What are prohibited AI practices?

Prohibited AI practices are uses considered unacceptable under the EU AI Act, such as certain manipulative or exploitative systems and specific biometric surveillance use cases.

Do startups need to comply with the EU AI Act?

Yes. Startup status does not remove AI Act obligations. If your product is offered in the EU or affects people in the EU, you should assess classification and related duties.

Can a low-risk AI system still have obligations?

Yes. Even if your AI is likely minimal risk under the AI Act, you can still have obligations under laws such as GDPR, consumer protection, and sector-specific rules.

Is this tool legal advice?

No. This page and tool provide informational guidance and a preliminary assessment only. They do not replace advice from qualified legal professionals.

Keep learning

Explore deeper compliance guidance and practical implementation playbooks.

Ready to assess your product?

Move from preliminary answers to a structured compliance report in minutes.