---
title: "EU AI Act Checklist"
canonical_url: "https://www.sorena.io/artifacts/eu/ai-act/checklist"
source_url: "https://www.sorena.io/artifacts/eu/artificial-intelligence-act/checklist"
author: "Sorena AI"
description: "Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging."
published_at: "2026-03-04"
updated_at: "2026-03-04"
keywords:
  - "EU AI Act checklist"
  - "AI Act compliance checklist"
  - "Article 5 checklist"
  - "Article 50 checklist"
  - "high risk AI checklist"
  - "GPAI checklist"
  - "Annex IV checklist"
  - "EU AI Act"
  - "Checklist"
  - "High risk"
  - "Article 50"
  - "GPAI"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform


---

# EU AI Act Checklist

Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.


## EU AI Act (Regulation (EU) 2024/1689) Compliance checklist

A useful checklist is a list of outputs, owners, and review points.

This one is structured so teams can move from scoping to evidence instead of collecting generic policy language.

The AI Act does not reward broad statements about responsible AI. It rewards accurate classification, complete evidence, and operating controls that still work after launch. Use this checklist by system and by model, then track gaps in the same delivery tooling you use for security and product work.

## Portfolio and scoping foundation

Before you touch technical controls, make sure the system is registered, scoped, and assigned to the right role owners. Every later obligation depends on this layer being correct.

This is also the point where many teams catch shadow AI use, unapproved model APIs, and downstream resale arrangements that were never reflected in the compliance record.

- AI system register complete with intended purpose, owner, deployment context, and model dependencies.
- Role matrix complete for provider, deployer, importer, distributor, and authorised representative where relevant.
- Exclusions and open source position documented with article basis.
- Reassessment triggers documented for model changes, new uses, and substantial modification.
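To make the register item concrete, here is a minimal sketch of one register entry as a data structure. The field names and the `is_scoped` rule are illustrative assumptions for a team's internal tooling, not terms defined by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system register; field names are illustrative."""
    system_name: str
    intended_purpose: str
    owner: str
    deployment_context: str
    model_dependencies: list[str] = field(default_factory=list)
    operator_roles: list[str] = field(default_factory=list)  # e.g. "provider", "deployer"
    reassessment_triggers: list[str] = field(default_factory=list)

    def is_scoped(self) -> bool:
        # Treat a record as scoped once purpose, owner, and at least
        # one operator role are recorded.
        return bool(self.intended_purpose and self.owner and self.operator_roles)

record = AISystemRecord(
    system_name="support-chat-assistant",
    intended_purpose="customer support triage",
    owner="product-ops",
    deployment_context="EU customer portal",
    model_dependencies=["third-party GPAI API"],
    operator_roles=["deployer"],
    reassessment_triggers=["model version change", "new use case"],
)
```

A structure like this also makes the reassessment triggers queryable, so a model change can automatically reopen the scoping review.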

## Article 5 and AI literacy controls

The earliest live obligations are not optional extras. Prohibited practice screening and AI literacy should be operating controls, not one time training slides.

The screening outcome should be attached to release approvals and high impact design reviews.

- Article 5 screening completed before launch and on material changes.
- Escalation path exists for manipulative techniques, vulnerability exploitation, prohibited biometrics, and social scoring concerns.
- Article 4 AI literacy training is assigned to product, operations, support, and oversight roles.
- Training completion and role specific guidance are retained as evidence.

## High risk system controls

If a system is high risk, the evidence standard changes completely. You need a lifecycle assurance set built around Articles 9 to 15 and the provider and deployer duties that sit around them.

Do not stop at a risk register. High risk evidence has to cover technical documentation, human oversight, logging, post market monitoring, and authority facing records.

- Article 9 risk management system documented and maintained across the lifecycle.
- Article 10 data governance controls and data quality evidence complete.
- Article 11 Annex IV technical documentation plan complete and version linked to releases.
- Articles 12 to 15 logging, instructions, human oversight, accuracy, robustness, and cybersecurity controls evidenced.
- Fundamental rights impact assessment (FRIA), affected person notices, EU database registration, conformity route, and CE marking checked where applicable.
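The "version linked to releases" item is easy to automate. A minimal sketch, assuming a simple release record shape where each release either points at an Annex IV documentation version or does not:

```python
# Illustrative release records; the key names are assumptions.
releases = [
    {"version": "2.3.0", "annex_iv_doc": "tdoc-2.3.0"},
    {"version": "2.4.0", "annex_iv_doc": None},
]

def missing_docs(releases: list[dict]) -> list[str]:
    """Return release versions that shipped without linked Annex IV documentation."""
    return [r["version"] for r in releases if not r["annex_iv_doc"]]

print(missing_docs(releases))  # ['2.4.0']
```

Run as a CI check, this surfaces documentation drift at release time instead of during an audit.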

## Transparency and user disclosure controls

Article 50 work belongs in product delivery, content operations, accessibility review, and analytics. If users interact with AI or receive synthetic content, the disclosure logic must be built into the interface and into the release process.

Where machine readable marking is required, product teams should treat it as a technical feature with QA evidence, not as marketing copy.

- Direct interaction notices mapped to all relevant product surfaces.
- Synthetic image, audio, video, and text marking triggers documented.
- Deepfake and public interest publication disclosures reviewed with editorial owners.
- Display evidence, screenshots, and privacy minimised logging retained.
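One way to make disclosure triggers testable in QA is a surface-to-duty mapping that the release checklist can assert against. The mapping below is an illustration of the technique, not a legal determination of which duty applies to which surface.

```python
# Hypothetical surface names mapped to Article 50 duty labels.
DISCLOSURE_RULES = {
    "chat_interface": "direct-interaction notice",
    "synthetic_image": "machine-readable marking",
    "synthetic_audio": "machine-readable marking",
    "deepfake_video": "deepfake disclosure",
}

def required_disclosures(surfaces: list[str]) -> dict[str, str]:
    """Return the disclosure duty for each product surface that has one."""
    return {s: DISCLOSURE_RULES[s] for s in surfaces if s in DISCLOSURE_RULES}
```

A QA test can then fail the build if a surface with a mapped duty ships without its corresponding notice or marking evidence.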

## GPAI provider and supplier controls

Chapter V applies at model level. If you provide a GPAI model, you need technical documentation, downstream information, a copyright policy, and a public summary of training content. If you rely on a third party GPAI provider, you need contract and evidence demands that let your system level compliance continue to work.

Systemic risk scenarios demand a higher state of readiness, because notification, safety and security, and serious incident reporting duties all tighten.

- Article 53 technical documentation and downstream documentation workflow in place.
- Copyright policy maintained and linked to data sourcing governance.
- Public summary of training content prepared using the AI Office template where required.
- Article 52 systemic risk notification path and Article 55 serious incident reporting process defined.
- Supplier clauses require version notices, technical documentation extracts, and incident cooperation.
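Checking supplier contracts against the clause list above can be reduced to a set comparison. A hedged sketch, where the clause keys are hypothetical labels for a team's own contract register:

```python
# Hypothetical clause labels matching the supplier demands listed above.
REQUIRED_CLAUSES = {"version_notices", "technical_doc_extracts", "incident_cooperation"}

def missing_clauses(contract: dict) -> set[str]:
    """Return the required clauses a supplier contract record does not yet cover."""
    return REQUIRED_CLAUSES - set(contract.get("clauses", []))

gap = missing_clauses({"supplier": "gpai-vendor", "clauses": ["version_notices"]})
```

Running this across the supplier register gives a renewal-time worklist instead of a surprise during an evidence request.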

## Authority ready evidence pack

Your final test is simple: if a buyer, regulator, or partner asks for evidence tomorrow, can you produce a coherent file without rebuilding the story from scratch?

The evidence pack should be kept current and tied to actual versions, not abstract program material.

- Decision records for applicability, high risk status, transparency, and GPAI duties.
- Release linked technical documentation, instructions, and test records.
- Incident, complaint, and corrective action workflow records.
- Named owners, last review dates, and next review triggers for every major artifact.
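The "last review dates and next review triggers" item can be enforced with a small staleness check, assuming each artifact record carries a name and a last-review date (the record shape is an assumption, not a prescribed format):

```python
from datetime import date, timedelta

def overdue_artifacts(artifacts: list[dict], today: date, max_age_days: int = 365) -> list[str]:
    """Return names of artifacts whose last review is older than the review interval."""
    cutoff = today - timedelta(days=max_age_days)
    return [a["name"] for a in artifacts if a["last_review"] < cutoff]

artifacts = [
    {"name": "applicability decision", "last_review": date(2025, 1, 15)},
    {"name": "Annex IV technical documentation", "last_review": date(2026, 2, 1)},
]
print(overdue_artifacts(artifacts, today=date(2026, 3, 4)))  # ['applicability decision']
```

Scheduled against the evidence pack, a check like this is what keeps the file "current and tied to actual versions" rather than reconstructed on demand.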

*Recommended next step*


## Turn this checklist into an operational assessment

Assessment Autopilot can turn this EU AI Act (Regulation (EU) 2024/1689) compliance checklist into a reusable workflow inside Sorena. Teams working on the AI Act can keep owners, evidence, and next steps aligned without copying this guide into separate documents.

- [Open Assessment Autopilot for the EU AI Act compliance checklist](/solutions/assessment.md): Turn the guidance into owned tasks, evidence requests, and review checkpoints.
- [Talk through the EU AI Act](/contact.md): Review your current process, evidence gaps, and next steps.

## Primary sources

- [Regulation (EU) 2024/1689 (EU AI Act) - Official Journal](https://eur-lex.europa.eu/eli/reg/2024/1689/oj?ref=sorena.io) - Primary legal text for scope, obligations, annexes, transition rules, and penalties.
- [AI Act Service Desk - Implementation timeline](https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act?ref=sorena.io) - Official staging reference for phased application dates.
- [AI Act Service Desk - Article 50 transparency obligations](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50?ref=sorena.io) - Official support page for interaction, disclosure, and deepfake duties.
- [European Commission - Guidelines for providers of general purpose AI models](https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers?ref=sorena.io) - Official guidance on Chapter V scope, timelines, and AI Office expectations.
- [European Commission - Explanatory notice and template for the public summary of training content](https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models?ref=sorena.io) - Official Article 53(1)(d) template and implementation notice.

## Related Topic Guides

- [EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide](/artifacts/eu/artificial-intelligence-act/applicability-and-roles.md): Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
- [EU AI Act Applicability Test | Scope, Role, and Obligation Routing](/artifacts/eu/artificial-intelligence-act/applicability-test.md): Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
- [EU AI Act Compliance Program | Build an Operational AI Act Program](/artifacts/eu/artificial-intelligence-act/compliance.md): Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
- [EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan](/artifacts/eu/artificial-intelligence-act/deadlines-and-compliance-calendar.md): Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
- [EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties](/artifacts/eu/artificial-intelligence-act/faq.md): Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
- [EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide](/artifacts/eu/artificial-intelligence-act/gpai-and-foundation-model-obligations.md): Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
- [EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes](/artifacts/eu/artificial-intelligence-act/high-risk-ai-use-cases-by-industry.md): See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
- [EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond](/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist.md): Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
- [EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure](/artifacts/eu/artificial-intelligence-act/penalties-and-fines.md): Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
- [EU AI Act Prohibited AI Practices | Article 5 Screening Guide](/artifacts/eu/artificial-intelligence-act/prohibited-ai-practices.md): Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
- [EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI](/artifacts/eu/artificial-intelligence-act/requirements.md): Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
- [EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap](/artifacts/eu/artificial-intelligence-act/timeline-and-phasing-roadmap.md): Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
- [EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide](/artifacts/eu/artificial-intelligence-act/transparency-labeling-and-user-disclosures.md): Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
- [EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-iso-42001.md): Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
- [EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-nist-ai-rmf.md): Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.


---


(c) 2026 Sorena AB (559573-7338). All rights reserved.

Source: https://www.sorena.io/artifacts/eu/artificial-intelligence-act/checklist
