---
title: "EU AI Act Requirements"
canonical_url: "https://www.sorena.io/artifacts/eu/ai-act/requirements"
source_url: "https://www.sorena.io/artifacts/eu/artificial-intelligence-act/requirements"
author: "Sorena AI"
description: "Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems, Article 50 transparency duties, and Chapter V GPAI obligations."
published_at: "2026-03-04"
updated_at: "2026-03-04"
keywords:
  - "EU AI Act requirements"
  - "AI Act obligations overview"
  - "Article 5 Article 6 Article 50 Chapter V"
  - "high risk requirements"
  - "GPAI requirements"
  - "transparency requirements"
  - "EU AI Act"
  - "Requirements"
  - "Article 5"
  - "Article 50"
  - "Chapter V"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform

[Home](https://www.sorena.io/) | [Solutions](https://www.sorena.io/solutions) | [Artifacts](https://www.sorena.io/artifacts) | [About Us](https://www.sorena.io/about-us) | [Contact](https://www.sorena.io/contact) | [Portal](https://app.sorena.io)

---

# EU AI Act Requirements

Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems, Article 50 transparency duties, and Chapter V GPAI obligations.

*EU AI Act* *Overview*

## EU AI Act (Regulation (EU) 2024/1689) Requirements overview

The Act routes systems and models into obligation sets. Compliance starts with correct routing.

This page summarizes the four main requirement tracks and the evidence each one drives.

A good AI Act implementation begins with a simple question: which requirement track applies to this system or model? The Regulation is not one flat checklist. It imposes different obligations depending on whether the system is prohibited, high risk, subject to transparency duties, or connected to GPAI provider obligations.
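As an illustration only, not logic taken from the Regulation itself, the routing question can be sketched as a small classifier. Every name here (`SystemProfile`, `Track`, `route`) is hypothetical, and the sketch assumes the tracks are cumulative: a single system can trigger high risk and transparency duties at once, while a prohibited-practice finding stops routing entirely.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Track(Enum):
    PROHIBITED = auto()     # Article 5 prohibited practices
    HIGH_RISK = auto()      # Article 6(1) / Article 6(2) and Annex III
    TRANSPARENCY = auto()   # Article 50 disclosure duties
    GPAI = auto()           # Chapter V provider obligations
    MINIMAL = auto()        # no specific track triggered

@dataclass
class SystemProfile:
    uses_prohibited_practice: bool   # outcome of the Article 5 screen
    is_high_risk: bool               # outcome of Article 6 routing
    interacts_with_persons: bool     # an Article 50 trigger
    is_gpai_model: bool              # Chapter V provider scope

def route(profile: SystemProfile) -> list[Track]:
    """Return every track that applies; tracks stack, they are not exclusive."""
    if profile.uses_prohibited_practice:
        # Prevention, not mitigation: no further routing after an Article 5 hit.
        return [Track.PROHIBITED]
    tracks = []
    if profile.is_gpai_model:
        tracks.append(Track.GPAI)
    if profile.is_high_risk:
        tracks.append(Track.HIGH_RISK)
    if profile.interacts_with_persons:
        tracks.append(Track.TRANSPARENCY)
    return tracks or [Track.MINIMAL]
```

The list return type is the point of the sketch: treating the tracks as mutually exclusive is a common implementation mistake.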

## Track one: prohibited practices

If Article 5 applies, the correct operational outcome is prevention. These are not use cases you mitigate into compliance through documentation alone. Product and procurement teams need an active gate that screens for manipulation, exploitation of vulnerabilities, social scoring, prohibited biometric practices, and other listed categories.

Because Article 5 infringements carry the highest penalties, this track should be visible at executive level.

- Key output: screening record and stop or redesign decision.
- Key owners: product, legal, risk, security.
- Key evidence: use case rationale, user journey, data and biometric review.
- Key review timing: intake, procurement, and material changes.
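The screening record and stop-or-redesign decision above can be modeled as a minimal data structure. This is a hypothetical sketch, not a prescribed format; the category strings merely echo the Article 5 examples named in this track, and a real intake gate would carry far more context.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScreeningRecord:
    """One Article 5 screen at intake, procurement, or material change."""
    use_case: str
    reviewed_on: date
    # Categories flagged by reviewers, e.g. "social scoring",
    # "exploitation of vulnerabilities", "prohibited biometric practices".
    flagged_categories: list[str] = field(default_factory=list)

    @property
    def decision(self) -> str:
        # Any Article 5 flag means stop or redesign, never mitigate-and-proceed.
        return "stop_or_redesign" if self.flagged_categories else "proceed"
```

Making the decision a derived property, rather than a free-text field, keeps the gate consistent with the prevention posture described above.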

## Track two: high risk systems

High risk obligations apply through the product safety route in Article 6(1) or the Annex III route in Article 6(2). Once a system is high risk, the provider and deployer evidence standard increases sharply. Articles 9 to 15 set the technical and organizational core, but the surrounding duties on conformity, registration, monitoring, retention, and user information matter just as much.

This track is where most enterprise implementation work sits because it touches architecture, data governance, testing, human oversight, and post market operations.

- Key output: Annex IV aligned technical documentation and control evidence.
- Key owners: provider, deployer, engineering, risk, quality, operations.
- Key evidence: risk management, data governance, logging, oversight, accuracy, cybersecurity, FRIA where applicable.
- Key review timing: design, pre release, and post market monitoring.

## Track three: transparency duties

Article 50 creates obligations where natural persons interact directly with AI systems, where synthetic outputs must be machine readable and detectable, where emotion recognition or biometric categorisation systems are used, and where deepfakes or certain AI generated text are deployed. This work belongs inside product, design, accessibility, editorial, and QA processes.

Transparency is not solved by a generic footer. The notice has to appear in the right interface, at the right moment, and often with technical marking or logging behind it.

- Key output: disclosure and marking design across product surfaces.
- Key owners: product, design, content, engineering, QA.
- Key evidence: copy library, screenshots, machine readable marking logic, and display logs.
- Key review timing: feature design, release, and model or content pipeline changes.
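To make "machine readable marking" concrete, here is a deliberately simplified sketch of attaching provenance metadata to a synthetic output. The JSON envelope and field names are invented for illustration; a production implementation would use an interoperable provenance standard such as C2PA rather than an ad hoc format.

```python
import json

def mark_synthetic_output(content: str, model_id: str) -> str:
    """Wrap generated text with a machine readable provenance marker.

    Illustrative only: real Article 50 marking should follow an
    interoperable standard (e.g. C2PA), not this ad hoc envelope.
    """
    marker = {"ai_generated": True, "model": model_id}
    return json.dumps({"content": content, "provenance": marker})
```

Keeping the marker structured and attached at generation time is what makes the display logs and QA evidence listed above possible to produce later.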

## Track four: GPAI provider obligations

Chapter V applies to providers of general purpose AI models. It requires model level technical documentation, downstream documentation, a copyright policy, and a public summary of training content. Systemic risk adds compute threshold monitoring, notification, mitigation, and serious incident response expectations.

Even organizations that are not GPAI providers need to understand this track because they often depend on upstream model suppliers for evidence and contractual support.

- Key output: Articles 53 to 55 artifact set and supplier evidence model.
- Key owners: model provider, legal, security, documentation, operations.
- Key evidence: training content summary, copyright policy, technical documentation, notification and incident records.
- Key review timing: before market placement, on major model changes, and during systemic risk monitoring.
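The compute threshold monitoring mentioned above can be reduced to a single comparison: Article 51 presumes systemic risk for a GPAI model whose cumulative training compute exceeds 10^25 floating point operations. The function name below is invented, and the sketch ignores the fact that the AI Office can also designate models below the threshold.

```python
# Article 51 presumption threshold: cumulative training compute in FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True when the Article 51 compute presumption is triggered.

    Simplified: designation by the AI Office can apply systemic risk
    obligations to models below this threshold as well.
    """
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

In practice the hard part is not the comparison but maintaining a defensible estimate of cumulative training compute across model versions.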

*Recommended next step*


## Turn this requirements overview into an operational assessment

Assessment Autopilot can turn this EU AI Act (Regulation (EU) 2024/1689) requirements overview into assigned actions and a reusable workflow inside Sorena. Teams working on the Act can keep owners, evidence, and next steps aligned without copying this guide into separate documents.

- [Open Assessment Autopilot for the EU AI Act](/solutions/assessment.md): Turn this guidance into owned tasks, evidence requests, and review checkpoints.
- [Talk through the EU AI Act](/contact.md): Review your current process, evidence gaps, and next steps for this requirements overview.

## Primary sources

- [Regulation (EU) 2024/1689 (EU AI Act) - Official Journal](https://eur-lex.europa.eu/eli/reg/2024/1689/oj?ref=sorena.io) - Primary legal text for scope, obligations, annexes, transition rules, and penalties.
- [AI Act Service Desk - Annex III high risk use cases](https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3?ref=sorena.io) - Official annex landing page for high risk use cases.
- [AI Act Service Desk - Article 50 transparency obligations](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50?ref=sorena.io) - Official support page for interaction, disclosure, and deepfake duties.
- [European Commission - Guidelines for providers of general purpose AI models](https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers?ref=sorena.io) - Official guidance on Chapter V scope, timelines, and AI Office expectations.

## Related Topic Guides

- [EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide](/artifacts/eu/artificial-intelligence-act/applicability-and-roles.md): Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
- [EU AI Act Applicability Test | Scope, Role, and Obligation Routing](/artifacts/eu/artificial-intelligence-act/applicability-test.md): Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
- [EU AI Act Checklist | Practical Compliance Checklist by Obligation](/artifacts/eu/artificial-intelligence-act/checklist.md): Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
- [EU AI Act Compliance Program | Build an Operational AI Act Program](/artifacts/eu/artificial-intelligence-act/compliance.md): Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
- [EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan](/artifacts/eu/artificial-intelligence-act/deadlines-and-compliance-calendar.md): Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
- [EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties](/artifacts/eu/artificial-intelligence-act/faq.md): Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
- [EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide](/artifacts/eu/artificial-intelligence-act/gpai-and-foundation-model-obligations.md): Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
- [EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes](/artifacts/eu/artificial-intelligence-act/high-risk-ai-use-cases-by-industry.md): See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
- [EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond](/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist.md): Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
- [EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure](/artifacts/eu/artificial-intelligence-act/penalties-and-fines.md): Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
- [EU AI Act Prohibited AI Practices | Article 5 Screening Guide](/artifacts/eu/artificial-intelligence-act/prohibited-ai-practices.md): Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
- [EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap](/artifacts/eu/artificial-intelligence-act/timeline-and-phasing-roadmap.md): Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
- [EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide](/artifacts/eu/artificial-intelligence-act/transparency-labeling-and-user-disclosures.md): Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
- [EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-iso-42001.md): Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
- [EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-nist-ai-rmf.md): Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.


---

[Privacy Policy](https://www.sorena.io/privacy) | [Terms of Use](https://www.sorena.io/terms-of-use) | [DMCA](https://www.sorena.io/dmca) | [About Us](https://www.sorena.io/about-us)

(c) 2026 Sorena AB (559573-7338). All rights reserved.

Source: https://www.sorena.io/artifacts/eu/artificial-intelligence-act/requirements
