---
title: "EU AI Act Compliance Program"
canonical_url: "https://www.sorena.io/artifacts/eu/ai-act/compliance"
source_url: "https://www.sorena.io/artifacts/eu/artificial-intelligence-act/compliance"
author: "Sorena AI"
description: "Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work."
published_at: "2026-03-04"
updated_at: "2026-03-04"
keywords:
  - "EU AI Act compliance program"
  - "AI Act implementation program"
  - "AI Act governance"
  - "AI Act operating model"
  - "EU AI Act post market monitoring"
  - "EU AI Act evidence pack"
  - "EU AI Act"
  - "Compliance program"
  - "Governance"
  - "High risk"
  - "GPAI"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform


---

# EU AI Act Compliance Program

Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, and Article 50 product work.

*EU AI Act* *Operating model*

## EU AI Act (Regulation (EU) 2024/1689) Compliance Program

A workable AI Act program connects legal duties to product delivery and operations.

This page sets out how to structure governance, owners, evidence, and steady state reviews so compliance survives beyond one launch cycle.

A publication grade AI Act program is not a single policy, and it is not only for legal teams. It is a cross functional operating model that decides scope, classifies obligations, turns those obligations into build work, and then keeps the resulting evidence current as systems, models, and markets change.

## Program architecture

Build the program around a shared AI inventory, a role matrix, and a classification workflow. Every system and model should move through the same logic: scope, Article 5 screening, high risk analysis, transparency analysis, GPAI analysis, and release controls.

Make one team accountable for the operating model, but keep execution distributed. Product owns intended purpose and release. Engineering owns implementation and evidence. Security owns threat and resilience controls. Legal and compliance own classification review and authority response.

- Single AI register across products, internal tools, and supplier dependencies.
- Formal review forum for ambiguous scope and classification decisions.
- Named owner for each evidence artifact and each release gate.
- Escalation path for prohibited practice, systemic risk, serious incident, and authority notice events.
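The classification workflow above can be sketched as a single ordered routing function that every system passes through. This is an illustrative sketch, not an official decision tree: the class names, fields, and route labels are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    OUT_OF_SCOPE = auto()
    PROHIBITED = auto()    # Article 5 hit: escalate, do not ship
    HIGH_RISK = auto()     # Article 6 / Annex III track
    TRANSPARENCY = auto()  # Article 50 duties
    GPAI = auto()          # Chapter V provider track
    MINIMAL = auto()       # no specific AI Act obligations identified

@dataclass
class AISystem:
    name: str
    in_scope: bool
    article5_hit: bool = False
    high_risk: bool = False
    transparency_trigger: bool = False
    gpai_provider: bool = False

def classify(system: AISystem) -> list[Route]:
    """Run every system and model through the same ordered logic:
    scope, Article 5 screening, then the parallel obligation tracks."""
    if not system.in_scope:
        return [Route.OUT_OF_SCOPE]
    if system.article5_hit:
        # Hard stop before any other analysis.
        return [Route.PROHIBITED]
    routes: list[Route] = []
    if system.high_risk:
        routes.append(Route.HIGH_RISK)
    if system.transparency_trigger:
        routes.append(Route.TRANSPARENCY)  # can stack with high risk
    if system.gpai_provider:
        routes.append(Route.GPAI)
    return routes or [Route.MINIMAL]
```

The point of the sketch is that transparency, high risk, and GPAI outcomes can stack, while an Article 5 hit short-circuits everything and goes straight to the escalation path.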

## Phasing by legal date

The dates matter because the obligation mix changes over time. Entry into force was 1 August 2024. From 2 February 2025, the general provisions, prohibited practices, and AI literacy requirements apply. From 2 August 2025, Chapter V obligations for general purpose AI models, the governance provisions, and most penalty provisions apply. The main application date is 2 August 2026, with later special dates for some product safety and transition scenarios.

A mature program turns each date into deliverables. That means owners, acceptance criteria, artifact templates, and review calendars tied to the applicable phase.

- Before 2 February 2025: inventory, Article 5 gate, AI literacy, and role mapping live.
- Before 2 August 2025: GPAI provider workflow, AI Office readiness, and supplier evidence demands live.
- Before 2 August 2026: high risk system controls, FRIA process, and transparency product components live.
- Before 2 August 2027 and 31 December 2030: transition cases tracked and revalidated.
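The phase list above can be held as structured data so each deliverable carries its legal deadline and the review calendar can be computed rather than maintained by hand. The dates come from the Regulation's staged application; the deliverable names are illustrative assumptions.

```python
from datetime import date

# Phased application dates from Regulation (EU) 2024/1689;
# deliverable labels are example program artifacts, not legal terms.
PHASES = [
    (date(2025, 2, 2), ["inventory", "article5_gate", "ai_literacy", "role_mapping"]),
    (date(2025, 8, 2), ["gpai_provider_workflow", "ai_office_readiness", "supplier_evidence"]),
    (date(2026, 8, 2), ["high_risk_controls", "fria_process", "article50_components"]),
    (date(2027, 8, 2), ["product_safety_transition_cases"]),
    (date(2030, 12, 31), ["long_tail_transition_cases"]),
]

def due_before(today: date) -> list[str]:
    """Deliverables whose phase date has already passed as of `today`."""
    return [
        deliverable
        for phase_date, deliverables in PHASES
        if phase_date <= today
        for deliverable in deliverables
    ]
```

A program dashboard can diff `due_before(date.today())` against the artifacts that actually exist to surface gaps per phase.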

## Core control stack

The control stack should be simple to describe and strong enough to survive challenge. The minimum set is inventory, classification, approvals, evidence retention, incident response, corrective action, and recurring review. Everything else should attach to those foundations.

Where possible, connect the AI Act program to existing release, security, procurement, and audit processes. Standalone AI governance tools often fail because they drift away from the systems that actually govern shipping.

- Release gate for Article 5, Article 50, and high risk checks.
- Supplier onboarding gate for model level documentation and incident terms.
- Evidence repository with version control and retention logic.
- Complaint, redress, and serious incident handling flow with named responders.
- Quarterly review of system changes, model upgrades, and open findings.
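The release gate in the list above can be expressed as a small check that returns blocking findings instead of a bare pass/fail, so each finding can be routed to its named owner. A minimal sketch, assuming boolean results collected from the owning teams; the field and finding names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GateChecks:
    article5_screened: bool
    article50_disclosures_done: bool
    high_risk: bool
    high_risk_controls_verified: bool = False

def release_gate(c: GateChecks) -> tuple[bool, list[str]]:
    """Return (passed, blocking_findings) for a release candidate."""
    findings: list[str] = []
    if not c.article5_screened:
        findings.append("Article 5 screening missing")
    if not c.article50_disclosures_done:
        findings.append("Article 50 disclosures incomplete")
    if c.high_risk and not c.high_risk_controls_verified:
        findings.append("High risk controls not verified")
    return (not findings, findings)
```

Attaching this check to the existing CI or change-approval step, rather than a standalone tool, is what keeps it connected to how shipping is actually governed.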

## High risk and GPAI specialist workstreams

Some systems need deeper specialist tracks. High risk systems need Annex IV planning, conformity work, human oversight design, logging design, post market monitoring, and, in certain contexts, FRIA and EU database registration. GPAI providers need model level documentation, copyright policy, training content summary publication, and systemic risk response readiness.

Do not bury these specialist tracks inside generic AI governance. They need named SMEs, templates, and evidence standards of their own.

- High risk workstream with provider and deployer playbooks.
- GPAI workstream with Article 53 to 55 artifact templates.
- Transparency workstream with design system components and QA evidence standards.
- Authority response workstream for requests, market surveillance, and corrective action.
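One way to keep the specialist tracks from blurring into generic governance is to maintain an explicit mapping from each workstream to its owned artifact templates. The artifact names below are illustrative assumptions, not an official list from the Regulation.

```python
# Illustrative workstream-to-artifact mapping; each entry should have a
# named SME owner and an evidence standard of its own.
WORKSTREAMS = {
    "high_risk": [
        "annex_iv_tech_doc",
        "human_oversight_design",
        "logging_spec",
        "post_market_monitoring_plan",
    ],
    "gpai": [
        "article53_model_documentation",
        "copyright_policy",
        "training_content_summary",
        "systemic_risk_response_plan",
    ],
    "transparency": ["article50_ui_components", "qa_evidence_standard"],
    "authority_response": ["request_log", "corrective_action_records"],
}
```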

## What good program evidence looks like

A strong program can prove both design intent and actual operation. That means training records, meeting outcomes, release gates, technical documentation, and monitoring results all point to the same current system state.

If your evidence is only narrative and not version linked, your program will look stronger on paper than it is in practice.

- Current inventory and role assignments.
- Decision records and signed approvals for major classification outcomes.
- Version linked product and model evidence, including release notes and test outputs.
- Incident and corrective action records that show the program is operating, not dormant.
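One way to make evidence version linked rather than narrative is to store each artifact with the exact system version it attests to, then compare that against what is actually deployed. A minimal sketch under that assumption; the record fields are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EvidenceRecord:
    artifact: str        # e.g. "annex_iv_tech_doc"
    system: str
    system_version: str  # the release this evidence attests to
    owner: str
    captured_on: date

def is_current(record: EvidenceRecord, deployed_version: str) -> bool:
    """Evidence is current only if it matches the deployed version."""
    return record.system_version == deployed_version
```

Running this comparison on a schedule turns "is our evidence current" from a narrative claim into a checkable property of the repository.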

*Recommended next step*


## Turn this EU AI Act compliance program into an operational assessment

Assessment Autopilot can turn the guidance in this EU AI Act (Regulation (EU) 2024/1689) compliance program into a tracked, reusable workflow inside Sorena. Teams working on the EU AI Act can keep owners, evidence, and next steps aligned without copying this guide into separate documents.

- [Open Assessment Autopilot for the EU AI Act compliance program](/solutions/assessment.md): Turn this guidance into owned tasks, evidence requests, and review checkpoints.
- [Talk through EU AI Act (Regulation (EU) 2024/1689)](/contact.md): Review your current process, evidence gaps, and next steps for your compliance program.

## Primary sources

- [Regulation (EU) 2024/1689 (EU AI Act) - Official Journal](https://eur-lex.europa.eu/eli/reg/2024/1689/oj?ref=sorena.io) - Primary legal text for scope, obligations, annexes, transition rules, and penalties.
- [AI Act Service Desk - Implementation timeline](https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act?ref=sorena.io) - Official staging reference for phased application dates.
- [European Commission - Decision establishing the European AI Office](https://digital-strategy.ec.europa.eu/en/library/commission-decision-establishing-european-ai-office?ref=sorena.io) - Official governance source for AI Office responsibilities.
- [European Commission - Guidelines for providers of general purpose AI models](https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers?ref=sorena.io) - Official guidance on Chapter V scope, timelines, and AI Office expectations.
- [European Commission - AI Act enters into force on 1 August 2024](https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en?ref=sorena.io) - Commission implementation overview and key dates.

## Related Topic Guides

- [EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide](/artifacts/eu/artificial-intelligence-act/applicability-and-roles.md): Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
- [EU AI Act Applicability Test | Scope, Role, and Obligation Routing](/artifacts/eu/artificial-intelligence-act/applicability-test.md): Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
- [EU AI Act Checklist | Practical Compliance Checklist by Obligation](/artifacts/eu/artificial-intelligence-act/checklist.md): Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
- [EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan](/artifacts/eu/artificial-intelligence-act/deadlines-and-compliance-calendar.md): Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
- [EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties](/artifacts/eu/artificial-intelligence-act/faq.md): Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
- [EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide](/artifacts/eu/artificial-intelligence-act/gpai-and-foundation-model-obligations.md): Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
- [EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes](/artifacts/eu/artificial-intelligence-act/high-risk-ai-use-cases-by-industry.md): See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
- [EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond](/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist.md): Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
- [EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure](/artifacts/eu/artificial-intelligence-act/penalties-and-fines.md): Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
- [EU AI Act Prohibited AI Practices | Article 5 Screening Guide](/artifacts/eu/artificial-intelligence-act/prohibited-ai-practices.md): Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
- [EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI](/artifacts/eu/artificial-intelligence-act/requirements.md): Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
- [EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap](/artifacts/eu/artificial-intelligence-act/timeline-and-phasing-roadmap.md): Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
- [EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide](/artifacts/eu/artificial-intelligence-act/transparency-labeling-and-user-disclosures.md): Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
- [EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-iso-42001.md): Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
- [EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-nist-ai-rmf.md): Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.


---

[Privacy Policy](https://www.sorena.io/privacy) | [Terms of Use](https://www.sorena.io/terms-of-use) | [DMCA](https://www.sorena.io/dmca) | [About Us](https://www.sorena.io/about-us)

(c) 2026 Sorena AB (559573-7338). All rights reserved.

