EU AI Act · Free Resource

EU AI Act Compliance Decision Map

A fast way to answer the AI Act questions that block execution. Confirm whether the AI Act applies, whether your scenario is prohibited or high-risk, and which obligations to implement next, including general-purpose AI model duties and transparency requirements.

This map is a snapshot. Guidance, standards, and delegated acts will evolve. Our Copilot keeps it current and tailored to your context. Our Autopilot evaluates where you stand and acts on the gaps.

Build my AI Act plan
What you can decide faster
Scope and prohibitions
Confirm Art. 2 applicability and exclusions, and screen against the 8 prohibited AI practices.
Risk classification
Annex I safety products, Annex III fundamental-rights domains, and limited-risk transparency duties.
GPAI obligations
General-purpose AI model provider duties, systemic risk controls, and codes of practice.
By Sorena AI · Updated Feb 2026 · No sign-up required
Risk classification (AI Act)
Reg. 2024/1689
Art. 5
Prohibited practices
8 banned use-cases; prohibitions apply from 2 Feb 2025.
Art. 6
High-risk AI
Annex I safety products + Annex III fundamental-rights domains.
Art. 50
Transparency
Deep fakes, emotion recognition, and AI-generated content labelling.
Use the map to connect scope, classification, and obligations to your products.
2 Feb 2025 · Prohibitions
Annex III · 8 domains
Art. 53 · GPAI duties
2 Aug 2026 · Full application
High-risk vs limited risk
GPAI systemic risk
AI Act Timeline

Key dates for AI Act implementation

Use this to plan what becomes applicable when, including high-risk timelines, general-purpose AI model obligations, and governance milestones.

[Interactive timeline: key application dates, from the 2 Feb 2025 prohibitions through full application on 2 Aug 2026]
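For planning purposes, here is a minimal sketch of how a team might encode these milestones and query what is in force on a given date. The milestone list is an assumption drawn from commonly cited application dates, not the authoritative schedule; confirm every date against Reg. 2024/1689 itself.

    from datetime import date

    # Illustrative milestones only; confirm dates against Reg. 2024/1689.
    MILESTONES = [
        (date(2025, 2, 2), "Prohibitions (Art. 5) apply"),
        (date(2025, 8, 2), "GPAI model provider duties (Art. 53) apply"),
        (date(2026, 8, 2), "Full application, incl. most high-risk duties"),
    ]

    def in_force(as_of: date) -> list[str]:
        """Return the milestones already applicable on a given date."""
        return [label for when, label in MILESTONES if when <= as_of]

    print(in_force(date(2026, 1, 1)))
    # -> ['Prohibitions (Art. 5) apply',
    #     'GPAI model provider duties (Art. 53) apply']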
AI Act Decision Map

Decide faster what the AI Act means for your product and role

Trace your path from scope to an actionable outcome: prohibited practice, high-risk provider/deployer program, transparency duties, or general-purpose AI model provider obligations.

[Decision map graphic: EU AI Act compliance decision map by Sorena AI]
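To make the flow concrete, here is a minimal sketch of the map's logic as a single function. The dictionary keys and the order of checks are illustrative assumptions, not a legal test; real classification requires case-by-case analysis of Reg. 2024/1689.

    # Illustrative sketch of the decision map's flow; the keys and the
    # order of checks are assumptions, not a legal test.

    def classify(system: dict) -> list[str]:
        """Trace scope -> prohibitions -> risk tier -> GPAI duties."""
        if not system.get("in_scope"):             # Art. 2 scope, exclusions
            return ["Out of scope"]
        if system.get("prohibited_practice"):      # Art. 5: 8 banned use-cases
            return ["Prohibited: do not place on the market or deploy"]
        outcomes = []
        if system.get("annex_i") or system.get("annex_iii"):   # Art. 6
            outcomes.append("High-risk: provider/deployer programme")
        elif system.get("transparency_trigger"):   # Art. 50: deep fakes etc.
            outcomes.append("Limited risk: transparency duties")
        else:
            outcomes.append("Minimal risk: voluntary codes")
        if system.get("gpai_provider"):            # Art. 53: model-level duties
            outcomes.append("GPAI model provider obligations")
        return outcomes

    print(classify({"in_scope": True, "annex_iii": True}))
    # -> ['High-risk: provider/deployer programme']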
Go further

Turn the AI Act into an evidence-ready program

This decision map is a strong baseline. Sorena Research Copilot can tailor it to your AI systems, risk categories, and roles, then generate deliverables your team can actually run.

  • Confirm scope and classify each AI system against Annex I and Annex III
  • Map high-risk provider and deployer obligations to owners and evidence (see the sketch after this list)
  • Operationalise transparency duties for deep fakes, emotion recognition, and AI-generated content
  • Plan GPAI model provider compliance, systemic risk assessments, and code-of-practice alignment
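As a hypothetical illustration of the obligations-to-evidence mapping above, the register below ties one obligation to an owner and its supporting artefacts. The article, role, owner, and field names are assumptions for the example, not Sorena's data model.

    from dataclasses import dataclass, field

    @dataclass
    class Obligation:
        article: str               # e.g. "Art. 9" (risk management system)
        role: str                  # "provider" or "deployer"
        owner: str                 # accountable person or team
        evidence: list = field(default_factory=list)  # artefacts on file

    register = [
        Obligation(
            article="Art. 9",
            role="provider",
            owner="Head of ML Platform",
            evidence=["risk management plan", "quarterly review minutes"],
        ),
    ]

    for o in register:
        status = "evidence on file" if o.evidence else "evidence missing"
        print(f"{o.article} ({o.role}) -> {o.owner}: {status}")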
Customize with Copilot
Tailor scope, duties, and evidence to your organisation.
Talk to an expert
Get help with classification, evidence, and implementation.