---
title: "EU AI Act Timeline, Decision Flow, and Compliance Guides"
canonical_url: "https://www.sorena.io/artifacts/eu/ai-act"
source_url: "https://www.sorena.io/artifacts/eu/artificial-intelligence-act"
author: "Sorena AI"
description: "Use this EU AI Act hub to work from the actual phased dates in Regulation (EU) 2024/1689, classify prohibited practices and high-risk systems, and plan phased compliance."
published_at: "2026-03-04"
updated_at: "2026-03-04"
keywords:
  - "EU AI Act"
  - "Regulation (EU) 2024/1689"
  - "EU AI Act timeline"
  - "EU AI Act decision flow"
  - "prohibited AI practices"
  - "Article 5 AI Act"
  - "high risk AI Annex III"
  - "Annex IV technical documentation"
  - "Article 50 transparency"
  - "EU AI Act GPAI"
  - "Article 53 training content summary"
  - "Article 55 serious incidents"
  - "EU AI Act penalties"
  - "AI Office"
  - "AI Act implementation guides"
  - "Article 5"
  - "Annex III"
  - "Article 50"
  - "General purpose AI"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform

[Home](https://www.sorena.io/) | [Solutions](https://www.sorena.io/solutions) | [Artifacts](https://www.sorena.io/artifacts) | [About Us](https://www.sorena.io/about-us) | [Contact](https://www.sorena.io/contact) | [Portal](https://app.sorena.io)

---

# EU AI Act Timeline, Decision Flow, and Compliance Guides

Use this EU AI Act hub to work from the actual phased dates in Regulation (EU) 2024/1689, classify prohibited practices, high-risk systems, transparency triggers, and GPAI duties, and turn that classification into a compliance plan.

![EU AI Act artifact preview](https://cdn.sorena.io/cdn-cgi/image/format=auto/cheatsheets/prod/sorena-ai-eu-ai-act-timeline-small.jpg?v=cheatsheets%2Fprod)


## EU AI Act Timeline, Decision Flow & Guides

Use this artifact to classify whether your AI system or model is prohibited, high risk, transparency triggered, or covered by GPAI duties, then turn that answer into a practical compliance plan grounded in Regulation (EU) 2024/1689.

The phased dates matter: the Regulation entered into force on 1 August 2024, prohibited practices and AI literacy apply from 2 February 2025, GPAI obligations apply from 2 August 2025, and most remaining rules apply from 2 August 2026, with later transition dates for certain cases.

[Get an AI Act readiness review](/contact.md)

## What you can decide faster

- **Scope and role**: Decide whether the output used in the Union brings the system into scope and who is provider, deployer, importer, or authorised representative.
- **Risk and obligations**: Separate Article 5 stop issues from Annex III high risk work, Article 50 disclosures, and GPAI provider duties.
- **Evidence path**: Move from classification to documentation, release gates, supplier asks, and post market monitoring.

By Sorena AI | Updated 2026 | No signup required

### Quick scan


- **Phased dates**: Track the 2025, 2026, 2027, and 2030 milestones that change what must be operational.
- **Decision flow**: Classify prohibited practices, high risk status, transparency duties, and GPAI exposure.
- **Implementation guides**: Use page specific guidance for applicability, high risk evidence, transparency, GPAI, and penalties.

Use the decision flow first, then use the linked guides to assign owners, evidence, and review dates.

| Date | Milestone |
| --- | --- |
| 1 Aug 2024 | Entered into force |
| 2 Feb 2025 | Early duties |
| 2 Aug 2025 | GPAI live |
| 2 Aug 2026 | Main application date |

**Key highlights:** Classify scope | Check Article 5 | Plan evidence

## Topic Guides

- [EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide](/artifacts/eu/artificial-intelligence-act/applicability-and-roles.md): Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, and importer roles.
- [EU AI Act Applicability Test | Scope, Role, and Obligation Routing](/artifacts/eu/artificial-intelligence-act/applicability-test.md): Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
- [EU AI Act Checklist | Practical Compliance Checklist by Obligation](/artifacts/eu/artificial-intelligence-act/checklist.md): Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
- [EU AI Act Compliance Program | Build an Operational AI Act Program](/artifacts/eu/artificial-intelligence-act/compliance.md): Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
- [EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan](/artifacts/eu/artificial-intelligence-act/deadlines-and-compliance-calendar.md): Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
- [EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties](/artifacts/eu/artificial-intelligence-act/faq.md): Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
- [EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide](/artifacts/eu/artificial-intelligence-act/gpai-and-foundation-model-obligations.md): Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
- [EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes](/artifacts/eu/artificial-intelligence-act/high-risk-ai-use-cases-by-industry.md): See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
- [EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond](/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist.md): Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
- [EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure](/artifacts/eu/artificial-intelligence-act/penalties-and-fines.md): Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent of worldwide annual turnover, whichever is higher.
- [EU AI Act Prohibited AI Practices | Article 5 Screening Guide](/artifacts/eu/artificial-intelligence-act/prohibited-ai-practices.md): Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
- [EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI](/artifacts/eu/artificial-intelligence-act/requirements.md): Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
- [EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap](/artifacts/eu/artificial-intelligence-act/timeline-and-phasing-roadmap.md): Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
- [EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide](/artifacts/eu/artificial-intelligence-act/transparency-labeling-and-user-disclosures.md): Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
- [EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-iso-42001.md): Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
- [EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-nist-ai-rmf.md): Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.

## Key dates for EU AI Act implementation


Track the legal phases so inventory, Article 5 screening, Chapter V work, high risk controls, and transition cases are sequenced correctly.

## Which AI Act duties apply to your system or model


Use the decision flow to route the system into the right obligation track, then open the supporting guides for detailed implementation.


## Turn EU AI Act Timeline, Decision Flow & Guides into an operational assessment workflow

This hub should be the shared entry point for your team. Route execution into Assessment Autopilot for live work and into Research Copilot when the work needs deeper research, evidence governance, or supporting analysis.

- Start from this hub and route the work by entity, product, team, or control owner.
- Use Assessment Autopilot to turn the guidance into owned tasks, evidence requests, and review checkpoints.
- Use Research Copilot to answer scope, timing, and interpretation questions with cited outputs.
- Move from artifact reading to accountable execution without rebuilding the guidance in separate files.

- [Open Assessment Autopilot](/solutions/assessment.md): Turn the guidance into owned tasks, evidence requests, and review checkpoints.
- [Open Research Copilot](/solutions/research-copilot.md): Answer scope, timing, and interpretation questions with cited outputs from the same artifact.
- **Download decision flow**: Share classification logic across product and legal teams.
- **Download timeline**: Align phased dates in your implementation plan.
- [Talk through your EU AI Act plan](/contact.md): Review your current process, evidence model, and next steps.

## Decision Steps

### STEP 1: Does the EU AI Act apply to your organisation and AI activity?

*Reference: Art. 2(1)*

- Providers placing on the market / putting into service AI systems or placing GPAI models on the Union market
- Deployers established or located in the Union
- Non-EU providers/deployers where the output is used in the Union
- Importers and distributors of AI systems; product manufacturers placing AI systems with their products under their name/trademark; authorised representatives of non-EU providers
- Affected persons located in the Union (rights and safeguards)

- **NO** EU AI Act likely does not apply
- **YES** Does your AI system or use-case fall under a prohibited AI practice?

### STEP 2: Does your AI system or use-case fall under a prohibited AI practice?

*Reference: Art. 5*

- Manipulative/deceptive techniques materially distorting behaviour causing (or likely causing) significant harm (Art. 5(1)(a))
- Exploitation of vulnerabilities (age/disability/socio-economic) causing (or likely causing) significant harm (Art. 5(1)(b))
- Social scoring leading to detrimental/unjustified or disproportionate treatment (Art. 5(1)(c))
- Criminal offence risk assessment of individuals based solely on profiling/personality traits (Art. 5(1)(d))
- Untargeted scraping to build or expand facial recognition databases (Art. 5(1)(e))
- Emotion recognition in workplace/education (narrow medical/safety exception) (Art. 5(1)(f))
- Biometric categorisation inferring sensitive attributes (Art. 5(1)(g))
- Real-time remote biometric ID in public spaces for law enforcement (narrow exceptions + safeguards) (Art. 5(1)(h))

- **YES** Stop: prohibited AI practice
- **NO** Are you placing a General-Purpose AI (GPAI) model on the Union market as a provider?

### STEP 3: Are you placing a General-Purpose AI (GPAI) model on the Union market as a provider?

*Reference: Art. 2(1)(a); Art. 3(3),(63)*

- GPAI model = broadly capable + integrable into downstream systems
- If you are only using a third-party model, continue on the AI-system path

- **YES** Is it a GPAI model with systemic risk?
- **NO** Is your AI system classified as high-risk?

### SYSTEMIC RISK: Is it a GPAI model with systemic risk?

- Systemic risk models have additional obligations (Art. 55)
- Commission can designate models ex officio; list is published and kept up to date (Art. 52(4)-(6))
- Commission Guidelines (C(2025) 5045 final) provide practical examples and classification guidance

- **YES** GPAI systemic-risk obligations
- **NO** GPAI model provider obligations

### STEP 4: Is your AI system classified as high-risk?

*Reference: Art. 6; Annex I; Annex III*

- High-risk if safety component/product under Annex I + third-party conformity assessment (Art. 6(1))
- High-risk if listed in Annex III (Art. 6(2))
- Annex III derogation: a system may be deemed not high-risk if it poses no significant risk and performs only a narrow task (Art. 6(3))
- An Annex III system is always high-risk if it performs profiling of natural persons (Art. 6(3))
- If you claim an Annex III system is not high-risk: document the assessment + register (Art. 6(4); Art. 49(2))

- **YES** Are you the provider of the high-risk AI system (or did you become one via modifications/rebranding)?
- **NO** Does your AI system trigger AI Act transparency obligations?

### STEP 5: Are you the provider of the high-risk AI system (or did you become one via modifications/rebranding)?

*Reference: Art. 3(3); Art. 16; Art. 25*

- Providers carry conformity assessment, CE marking, documentation and QMS duties
- Importers/distributors/deployers may become providers if they rebrand, substantially modify, or change intended purpose

- **YES** High-risk AI provider obligations
- **NO** High-risk AI deployer obligations

### STEP 6: Does your AI system trigger AI Act transparency obligations?

*Reference: Art. 50*

- AI systems that interact with people (chatbots/assistants) must disclose the interaction (Art. 50(1))
- Generative AI outputs must be marked as AI-generated/manipulated (Art. 50(2))
- Deployers of emotion recognition or biometric categorisation must inform exposed persons (Art. 50(3))
- Deepfakes and public-interest AI-generated text have disclosure duties (Art. 50(4))

- **YES** Transparency obligations apply
- **NO** No high-risk / transparency trigger found
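
The routing in Steps 1 to 6 can be sketched as a simple classifier. This is an illustrative sketch only, not legal advice: the `System` fields and their names are assumptions for the example, and a real assessment is a documented, case-by-case legal analysis, not a boolean check.

```python
from dataclasses import dataclass

# Illustrative sketch of the decision flow above. Outcome labels mirror
# the "Possible Outcomes" section; field names are assumptions for the
# example, not terms defined by the Regulation.

@dataclass
class System:
    in_scope: bool              # Union nexus under Art. 2(1)
    prohibited_practice: bool   # any Art. 5(1) practice
    gpai_provider: bool         # placing a GPAI model on the Union market
    systemic_risk: bool         # systemic-risk classification/designation
    high_risk: bool             # Art. 6; Annex I or Annex III
    is_provider: bool           # provider role (Art. 3(3); Art. 25)
    transparency_trigger: bool  # Art. 50

def classify(s: System) -> str:
    if not s.in_scope:
        return "OUT OF SCOPE"
    if s.prohibited_practice:
        return "PROHIBITED"
    if s.gpai_provider:
        return "GPAI (SYSTEMIC RISK)" if s.systemic_risk else "GPAI"
    if s.high_risk:
        return "HIGH-RISK (PROVIDER)" if s.is_provider else "HIGH-RISK (DEPLOYER)"
    if s.transparency_trigger:
        return "TRANSPARENCY"
    return "BASELINE"
```

For example, a deployer of a third-party chatbot with no high-risk use lands in the TRANSPARENCY track, while an in-scope system with no trigger at all still carries the BASELINE duties (AI literacy, re-classification checks).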

## Reference Information

### Scope & Exclusions (Quick)

- Excludes: national security; exclusively military/defence/national security use (Art. 2(3))
- Excludes: AI systems/models developed solely for scientific R&D (Art. 2(6))
- Excludes: personal non-professional use by natural persons (Art. 2(10))
- Open-source exception (limited): free and open-source AI systems are out of scope unless placed on the market or put into service as high-risk systems or as systems falling under Art. 5 or Art. 50 (Art. 2(12))
- Sectoral EU product/consumer laws still apply (Art. 2(9))

### Key Roles & Definitions

- Provider: develops (or has developed) and places on market/puts into service under own name/trademark (Art. 3(3))
- Deployer: uses an AI system under its authority (Art. 3(4))
- GPAI model: capable of performing a wide range of distinct tasks; integrable downstream (Art. 3(63))
- Systemic risk: high-impact GPAI risk that can propagate at scale (Art. 3(65))
- Value chain: modifications or rebranding can make you the provider (Art. 25)

### Baseline Obligation: AI Literacy

- Providers and deployers must take measures to ensure sufficient AI literacy of staff and other persons operating AI on their behalf (Art. 4)
- Consider technical knowledge, experience, education/training, and the context of use
- Applies even where your system is not high-risk

### Governance & Authorities

- AI Office (Union level): implementation/monitoring for AI systems and GPAI models (Art. 64; Def. Art. 3(47))
- European AI Board: Union-level governance and coordination (Art. 65)
- Advisory forum + scientific panel support implementation (Arts. 67-68)
- National competent authorities + single point of contact (Art. 70)

### Commission GPAI Guidelines (Scope)

- C(2025) 5045 final (18 July 2025): scope of Chapter V obligations
- Examples: what counts as a GPAI model; lifecycle; who is a provider placing on market
- Open-source exemptions: conditions on licence, lack of monetisation, public availability of parameters
- Annex: training compute estimation (for classification guidance)

### GPAI Code of Practice

- Voluntary tool to demonstrate compliance (Arts. 53(4), 55(2), 56)
- Chapters: Transparency, Copyright, Safety & Security
- Includes templates (e.g., Model Documentation Form) and practical measures

### Templates & Reporting (GPAI)

- Public Summary of Training Content template (Art. 53(1)(d))
- Model Documentation Form template (Code of Practice - Transparency chapter)
- Serious incidents reporting template (Art. 55(1)(c))

### Annex III (High-Risk Areas)

- Biometrics (remote ID, sensitive categorisation, emotion recognition)
- Critical infrastructure (as safety components)
- Education/vocational training (admission, evaluation, monitoring tests)
- Employment/workers management (recruitment, monitoring, termination, task allocation)
- Essential services/benefits (credit scoring, insurance pricing, emergency call triage)
- Law enforcement; migration/asylum/border; justice & democratic processes

### Annex III Derogation (Not High-Risk Claims)

- Annex III system can only be treated as not high-risk if it does not pose a significant risk of harm (incl. not materially influencing decision outcomes) and fits a narrow-task condition (Art. 6(3))
- Always high-risk if it performs profiling of natural persons (Art. 6(3))
- Provider must document the assessment and is subject to registration (Art. 6(4); Art. 49(2))
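
The derogation conditions above reduce to a guard that can be expressed in a few lines. This is an illustrative sketch under stated assumptions: the flag names are invented for the example, and the actual Art. 6(3) test requires a documented, case-by-case assessment of the system's intended purpose.

```python
# Illustrative sketch of the Annex III derogation logic (Art. 6(3)-(4)).
# Flag names are assumptions for the example, not legal terms of art.

def annex_iii_derogation_available(
    performs_profiling: bool,          # profiling of natural persons
    significant_risk_of_harm: bool,    # incl. materially influencing decision outcomes
    narrow_task_condition_met: bool,   # one of the Art. 6(3) narrow-task conditions
) -> bool:
    if performs_profiling:
        # Profiling of natural persons is always high-risk: no derogation.
        return False
    return (not significant_risk_of_harm) and narrow_task_condition_met
```

Even where the guard returns `True`, the provider must still document the assessment and register the system (Art. 6(4); Art. 49(2)).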

### Section 2 Requirements (High-Risk)

- Risk management system (Art. 9)
- Data & data governance (Art. 10)
- Technical documentation (Art. 11; Annex IV)
- Record-keeping/logging (Arts. 12 and 19)
- Transparency + instructions for use (Art. 13)
- Human oversight (Art. 14)
- Accuracy, robustness, cybersecurity (Art. 15)
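
One way to operationalise the Section 2 list above is as an article-to-evidence map. The article numbers come from the Regulation; the evidence examples are assumptions about what a typical program might collect, not legal guidance.

```python
# Illustrative evidence map for the Section 2 high-risk requirements above.
# Evidence examples are assumptions for the sketch, not prescribed artifacts.
SECTION_2_EVIDENCE = {
    "Art. 9":  ("Risk management system", "risk register + review minutes"),
    "Art. 10": ("Data & data governance", "dataset datasheets + bias checks"),
    "Art. 11": ("Technical documentation", "Annex IV technical file"),
    "Art. 12": ("Record-keeping/logging", "log retention config + samples"),
    "Art. 13": ("Transparency + instructions", "instructions for use"),
    "Art. 14": ("Human oversight", "oversight procedure + training records"),
    "Art. 15": ("Accuracy, robustness, cybersecurity", "test reports"),
}

def missing_evidence(collected: set[str]) -> list[str]:
    """Return the articles for which no evidence has been collected yet."""
    return [art for art in SECTION_2_EVIDENCE if art not in collected]
```

A release gate can then be as simple as `if missing_evidence(collected): block_release()`, with owners assigned per article.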

### Responsibilities Along the Value Chain

- If you rebrand, substantially modify, or change intended purpose you can become the provider (Art. 25(1))
- Initial provider must cooperate and provide required info/technical access (Art. 25(2))
- Supplier/provider agreements should allocate info/access needed for compliance (Art. 25(4))

### Notified Bodies & Conformity Assessment

- Member States designate notifying authorities (Art. 28)
- Notified bodies must meet independence and competence requirements (Art. 31)
- Identification numbers and lists of notified bodies (Art. 35)
- Use notified bodies where required by the conformity route (Art. 43 context)

### Harmonised Standards & Common Specifications

- Applying harmonised standards can create a presumption of conformity for covered AI Act requirements/obligations (Art. 40(1))
- If harmonised standards are missing or insufficient, the Commission may adopt common specifications (Art. 41)
- Standards/common specs can reduce ambiguity for documentation, testing, and conformity assessment routes (Art. 40-43 context)

### Fundamental Rights Impact Assessment (FRIA)

- Required before deploying certain high-risk AI systems (Art. 27(1))
- Describe use context, affected groups, risks, human oversight, and mitigations (Art. 27(1)(a)-(f))
- Update when changes occur; notify market surveillance authority with a template (Art. 27(2)-(5))

### Code of Practice (AI-generated Content)

- AI Office-led code of practice supports Art. 50(2) and (4) compliance
- Working group 1 (providers): machine-readable marking + robustness/interoperability
- Working group 2 (deployers): disclosure of deepfakes and other transparency duties
- Drafting timeline targets readiness before Art. 50 obligations apply

## Possible Outcomes

### [PROHIBITED] Stop: prohibited AI practice

Do not place on the market / put into service / use in the Union

- Re-design the system and/or intended purpose to remove the prohibited practice
- Assess if a different use-case or safeguards move you out of Art. 5
- Document the decision and seek legal review for edge cases (e.g., law enforcement exceptions)

### [GPAI] GPAI model provider obligations

Documentation, downstream transparency, copyright policy, and training-content summary

- Technical documentation for AI Office / authorities (Art. 53(1)(a); Annex XI)
- Information for downstream system providers (Art. 53(1)(b); Annex XII)
- Copyright compliance policy incl. rights reservations (Art. 53(1)(c))
- Publish public summary of training content (Art. 53(1)(d))

### [GPAI (SYSTEMIC RISK)] GPAI systemic-risk obligations

Art. 53 + additional systemic-risk controls

- Model evaluation + adversarial testing to identify/mitigate systemic risks (Art. 55(1)(a))
- Assess and mitigate systemic risks at Union level (Art. 55(1)(b))
- Report serious incidents + corrective measures to AI Office without undue delay (Art. 55(1)(c))
- Ensure adequate cybersecurity for the model and supporting infrastructure (Art. 55(1)(d))

### [HIGH-RISK (PROVIDER)] High-risk AI provider obligations

Requirements + conformity assessment + registration + post-market monitoring

- Meet Section 2 requirements (risk mgmt, data governance, logs, transparency, human oversight, cybersecurity)
- Quality management system (Art. 17) + technical documentation (Art. 11; Annex IV)
- Conformity assessment (Art. 43) + EU DoC (Art. 47) + CE marking (Art. 48)
- Register in EU database (Art. 49) and run post-market monitoring (Art. 72) + incident reporting (Art. 73)

### [HIGH-RISK (DEPLOYER)] High-risk AI deployer obligations

Use per instructions, human oversight, monitoring, FRIA (some cases), and transparency to affected persons

- Use per provider instructions + assign competent human oversight (Art. 26(1)-(3))
- Input data quality under your control (Art. 26(4))
- Monitor + suspend and notify for risks/serious incidents (Art. 26(5))
- Inform individuals subject to decisions made with Annex III systems (Art. 26(11)); perform a FRIA where applicable (Art. 27)

### [TRANSPARENCY] Transparency obligations apply

Disclose AI interaction, label AI-generated content, and handle deepfakes

- Inform users when they interact with an AI system unless obvious (Art. 50(1))
- Mark synthetic content outputs machine-readably and detectably (Art. 50(2))
- Inform people exposed to emotion recognition or biometric categorisation systems (Art. 50(3))
- Disclose deepfakes; disclose AI-generated public-interest text unless editorial control applies (Art. 50(4))
- Provide info clearly and accessibly by first interaction/exposure (Art. 50(5))

### [BASELINE] No high-risk / transparency trigger found

Still in scope: keep core controls and monitor future classification changes

- Maintain AI literacy measures (Art. 4)
- Re-check classification when intended purpose, autonomy, or context changes
- Track Commission guidelines and standards: Annex III use cases and Art. 50 list can evolve

### [OUT OF SCOPE] EU AI Act likely does not apply

No Union nexus under Art. 2(1) (or an exclusion applies)

- Document why you are out of scope (facts + legal basis)
- Re-assess if output becomes used in the Union or you place systems/models on the Union market
- Other laws (GDPR, product safety, sector rules) may still apply

## EU AI Act Timeline

| Date | Event | Reference |
| --- | --- | --- |
| 2024-07-12 | AI Act published in Official Journal (OJ L) | Reg. (EU) 2024/1689 |
| 2024-08-01 | AI Act enters into force (20 days after publication) | Art. 113 |
| 2025-02-02 | Chapters I (General provisions) and II (Prohibited practices) apply | Art. 113(a) |
| 2025-05-02 | Codes of practice for GPAI should be ready (latest) | Art. 56(9) |
| 2025-08-02 | GPAI obligations + governance + notified bodies + penalties apply | Art. 113(b) |
| 2026-02-02 | Commission provides Art. 6 high-risk classification guidelines (latest) | Art. 6(5) |
| 2026-08-02 | AI Act applies (general application date) | Art. 113 |
| 2027-08-02 | Art. 6(1) (Annex I product/safety-component high-risk) + corresponding obligations apply | Art. 113(c) |
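
The phased dates in the table can be turned into a simple lookup for planning purposes. This is an illustrative sketch built only from the table above; it does not model transition rules for systems or models already on the market, so treat it as a planning aid, not a legal determination.

```python
from datetime import date

# Phase starts taken from the EU AI Act timeline table above.
PHASES = [
    (date(2024, 8, 1), "Entry into force (Art. 113)"),
    (date(2025, 2, 2), "Chapters I-II: general provisions + prohibitions (Art. 113(a))"),
    (date(2025, 8, 2), "GPAI, governance, notified bodies, penalties (Art. 113(b))"),
    (date(2026, 8, 2), "General application (Art. 113)"),
    (date(2027, 8, 2), "Art. 6(1) Annex I high-risk route (Art. 113(c))"),
]

def applicable_phases(on: date) -> list[str]:
    """Return the phases that have started to apply as of the given date."""
    return [label for start, label in PHASES if on >= start]
```

For example, on 1 March 2025 only the entry into force and the Chapter I-II provisions have started to apply; from 2 August 2026 everything except the Art. 6(1) Annex I route is live.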

## Compliance Timeline

| Date | Event | Category |
| --- | --- | --- |
| 2024-01-24 | Commission Decision establishes the European AI Office | Notified bodies & governance |
| 2024-07-12 | AI Act published in the Official Journal | Legislative milestones |
| 2024-08-01 | AI Act enters into force | Legislative milestones |
| 2025-02-02 | Chapters I and II apply (including prohibited AI practices) | Prohibitions |
| 2025-07-10 | General-Purpose AI Code of Practice published | GPAI |
| 2025-07-18 | Commission adopts guidelines on GPAI obligations scope | GPAI |
| 2025-08-02 | GPAI obligations and governance provisions apply | GPAI |
| 2025-09-01 | Consultation to develop guidelines and a Code of Practice (transparent AI systems) | Transparency & labelling |
| 2025-09-26 | Consultation on serious AI incident reporting interplay | Incident reporting & post-market |
| 2025-10-01 | Chairs and vice-chairs selection | Transparency & labelling |
| 2025-11-04 | Reporting template for serious incidents (GPAI systemic risk) published | Incident reporting & post-market |
| 2025-11-05 | Kick-off plenary (start of 1st drafting round) | Transparency & labelling |
| 2025-11-17 | 1st working group meetings | Transparency & labelling |
| 2025-12-05 | Template published for public summary of GPAI training content | GPAI |
| 2025-12-17 | First draft published | Transparency & labelling |
| 2026-01-12 | Working group meetings (start of 2nd drafting round) | Transparency & labelling |
| 2026-01-21 | Workshops (working groups 1 and 2) | Transparency & labelling |
| 2026-03-01 | Second draft published (TBC) | Transparency & labelling |
| 2026-04-01 | Working group meetings (TBC) | Transparency & labelling |
| 2026-05-01 | Closing plenary and final Code of Practice published | Transparency & labelling |
| 2026-08-02 | AI Act applies (main obligations start) | Legislative milestones |
| 2026-08-02 | Commission enforcement powers for GPAI enter into application | GPAI |
| 2027-08-02 | Article 6(1) and corresponding obligations apply | High-risk AI |
| 2027-08-02 | Existing GPAI providers must comply by this date | GPAI |

**Event details:**

- **2024-01-24 - Commission Decision establishes the European AI Office**: The European Commission publishes the decision establishing the European AI Office.
- **2024-07-12 - AI Act published in the Official Journal**: Regulation (EU) 2024/1689 is published in the Official Journal (OJ L, 12.7.2024).
- **2024-08-01 - AI Act enters into force**: The EU AI Act enters into force, 20 days after publication.
- **2025-02-02 - Chapters I and II apply (including prohibited AI practices)**: Chapters I and II apply under the Act's entry-into-force and application rules.
- **2025-07-10 - General-Purpose AI Code of Practice published**: The GPAI Code of Practice is published as a voluntary tool to help providers meet AI Act obligations.
- **2025-07-18 - Commission adopts guidelines on GPAI obligations scope**: The Commission finalises its guidelines on the scope of obligations for general-purpose AI models (C(2025) 5045 final).
- **2025-08-02 - GPAI obligations and governance provisions apply**: Chapter V (general-purpose AI) and selected governance provisions start to apply (per Article 113).
- **2025-09-01 - Consultation to develop guidelines and a Code of Practice (transparent AI systems)**: September 2025: consultation on transparent AI systems, plus a call for expression of interest to participate.
- **2025-09-26 - Consultation on serious AI incident reporting interplay**: Consultation referenced alongside serious incident reporting guidance and templates for AI incidents.
- **2025-10-01 - Chairs and vice-chairs selection**: October 2025: eligibility checks and selection of applications for chairs and vice-chairs.
- **2025-11-04 - Reporting template for serious incidents (GPAI systemic risk) published**: The Commission publishes a reporting template for serious incidents involving GPAI models with systemic risk.
- **2025-11-05 - Kick-off plenary (start of 1st drafting round)**: Kick-off plenary; start of the first drafting round.
- **2025-11-17 - 1st working group meetings**: 17-18 November 2025: first working group meetings.
- **2025-12-05 - Template published for public summary of GPAI training content**: The Commission publishes an explanatory notice and a template for the public summary of training content (Article 53(1)(d)).
- **2025-12-17 - First draft published**: Publication of the first draft.
- **2026-01-12 - Working group meetings (start of 2nd drafting round)**: 12 and 14 January 2026: working group meetings; start of the second drafting round.
- **2026-01-21 - Workshops (working groups 1 and 2)**: 21-22 January 2026: workshops for working groups 1 and 2.
- **2026-03-01 - Second draft published (TBC)**: March 2026 (TBC): publication of the second draft; start of the final drafting round.
- **2026-04-01 - Working group meetings (TBC)**: April 2026 (TBC): working group meetings.
- **2026-05-01 - Closing plenary and final Code of Practice published**: May-June 2026: closing plenary; publication of the final Code of Practice.
- **2026-08-02 - AI Act applies (main obligations start)**: The AI Act applies in general (per Article 113).
- **2026-08-02 - Commission enforcement powers for GPAI enter into application**: Commission enforcement powers for obligations on providers of GPAI models enter into application (including fines).
- **2027-08-02 - Article 6(1) and corresponding obligations apply**: Article 6(1) and corresponding obligations apply (per Article 113).
- **2027-08-02 - Existing GPAI providers must comply by this date**: Providers of GPAI models placed on the market before 2 August 2025 must comply, per Commission guidance.


---

[Privacy Policy](https://www.sorena.io/privacy) | [Terms of Use](https://www.sorena.io/terms-of-use) | [DMCA](https://www.sorena.io/dmca) | [About Us](https://www.sorena.io/about-us)

(c) 2026 Sorena AB (559573-7338). All rights reserved.

Source: https://www.sorena.io/artifacts/eu/artificial-intelligence-act
