---
title: "EU AI Act High Risk Requirements Checklist"
canonical_url: "https://www.sorena.io/artifacts/eu/ai-act/high-risk-requirements-checklist"
source_url: "https://www.sorena.io/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist"
author: "Sorena AI"
description: "Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions."
published_at: "2026-03-04"
updated_at: "2026-03-04"
keywords:
  - "EU AI Act high risk checklist"
  - "Article 9 Article 10 Article 11 Article 12 Article 13 Article 14 Article 15 checklist"
  - "Annex IV checklist"
  - "FRIA checklist"
  - "AI Act post market monitoring"
  - "EU AI Act"
  - "High risk"
  - "Articles 9 to 15"
  - "Annex IV"
  - "FRIA"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform

[Home](https://www.sorena.io/) | [Solutions](https://www.sorena.io/solutions) | [Artifacts](https://www.sorena.io/artifacts) | [About Us](https://www.sorena.io/about-us) | [Contact](https://www.sorena.io/contact) | [Portal](https://app.sorena.io)

---

# EU AI Act High Risk Requirements Checklist

Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.

*EU AI Act* *High risk*

## EU AI Act (Regulation (EU) 2024/1689) High risk checklist

High risk compliance is a lifecycle system with evidence at every stage.

This checklist covers both the core requirements and the surrounding provider and deployer duties that make them real.

High risk systems require more than principles. They require structured controls, technical and organizational evidence, and release discipline that remains valid after deployment. This page maps the key requirements to the outputs teams should be able to show.

## Articles 9 and 10: risk management and data governance

Article 9 requires a continuous and iterative risk management system. It should cover known and reasonably foreseeable risks across the lifecycle, not only risks at initial design. Article 10 requires data governance and data quality measures appropriate to the system and its intended purpose.

This is where many high risk programs stand or fall. If the training, validation, and testing record is thin, later claims about performance and oversight will also be weak.

- Lifecycle risk register linked to design changes, testing, and post market events.
- Documented assumptions about intended purpose, users, and environment.
- Training, validation, and testing data governance evidence, including relevance and known limitations.
- Bias, representativeness, and error handling controls appropriate to the use case.
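One way to make the first bullet concrete is a machine-readable risk register. The sketch below is a hypothetical schema (none of these field names come from the Act): each risk links to the design changes, tests, and post market events that evidence it across the lifecycle.

```python
# Hypothetical lifecycle risk register entry; field names are illustrative,
# not prescribed by Article 9.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    lifecycle_stage: str  # e.g. "design", "validation", "post-market"
    linked_design_changes: list[str] = field(default_factory=list)
    linked_tests: list[str] = field(default_factory=list)
    post_market_events: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False

    def has_evidence(self) -> bool:
        # Treat a risk as evidenced once at least one test is linked to it.
        return bool(self.linked_tests)

entry = RiskEntry(
    risk_id="R-014",
    description="Model under-performs for under-represented age groups",
    lifecycle_stage="validation",
    linked_tests=["TST-102-subgroup-error-rates"],
)
print(entry.has_evidence())  # True once a test is linked
```

A register like this makes the linkage in the first bullet queryable: any entry where `has_evidence()` is false is an open gap in the record.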

## Articles 11 to 13: technical documentation, logging, and instructions

Article 11 requires technical documentation with the Annex IV content needed to demonstrate compliance. Article 12 requires automatic logging capabilities. Article 13 requires instructions for use that let deployers understand capabilities, limitations, expected performance, and required oversight.

This is not paperwork for its own sake. These materials allow deployers, notified bodies, and authorities to understand how the system should be used and where the limits are.

- Annex IV plan complete and linked to the specific system version.
- Logging design supports traceability, incident review, and required retention.
- Instructions for use include intended purpose, precluded uses, performance limits, and oversight needs.
- Provider and deployer teams agree on how instructions are operationalized in production.
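The logging bullet above can be sketched as a structured record design. The field names here are assumptions for illustration, not terms from Article 12; the point is that each record ties an output back to the specific system version the Annex IV pack covers, so incident review can reconstruct what ran and when.

```python
# Illustrative log record for traceability; schema is an assumption,
# not an Article 12 requirement.
import json
from datetime import datetime, timezone

def make_log_record(system_version: str, input_ref: str, output_ref: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,  # same version the Annex IV pack documents
        "input_ref": input_ref,            # a reference, not raw personal data
        "output_ref": output_ref,
    }
    return json.dumps(record)

line = make_log_record("2.4.1", "req-8812", "decision-8812")
print(json.loads(line)["system_version"])  # "2.4.1"
```

Keeping references rather than raw inputs in the log is one design choice that eases the retention duties discussed below.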

## Articles 14 and 15: human oversight, accuracy, robustness, and cybersecurity

Article 14 requires human oversight measures that fit the system and its context. The assigned natural persons need competence, training, authority, and practical means to intervene. Article 15 requires an appropriate level of accuracy, robustness, and cybersecurity in light of intended purpose and state of the art.

Oversight and performance should be designed together. A strong oversight model fails if operators cannot understand outputs or if the system degrades without usable monitoring.

- Oversight role assigned to trained persons with authority to pause or stop use.
- Escalation and fallback actions documented for abnormal outputs or degraded performance.
- Accuracy, robustness, and cybersecurity acceptance criteria defined and tested.
- Known limitations, failure modes, and residual risks documented for deployers.
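The third bullet, predefined acceptance criteria, can be expressed as a simple release gate. The thresholds below are invented placeholders; Article 15 asks for levels appropriate to the intended purpose and state of the art, not specific numbers.

```python
# Hedged sketch of an accuracy/robustness acceptance gate.
# Thresholds are placeholders, not values from the Act.
ACCEPTANCE_CRITERIA = {
    "accuracy_min": 0.92,
    "robustness_perturbed_accuracy_min": 0.85,
}

def meets_acceptance(measured: dict[str, float]) -> bool:
    # Both the clean-data and perturbed-data metrics must clear their floors.
    return (
        measured["accuracy"] >= ACCEPTANCE_CRITERIA["accuracy_min"]
        and measured["perturbed_accuracy"]
        >= ACCEPTANCE_CRITERIA["robustness_perturbed_accuracy_min"]
    )

print(meets_acceptance({"accuracy": 0.94, "perturbed_accuracy": 0.81}))  # False
```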

## Provider and deployer duties around the core requirements

Core requirements only work if the surrounding duties are also in place. Providers need quality management, conformity assessment, retention, registration, and post market monitoring. Deployers need to use the system according to instructions, assign oversight, keep logs under their control, inform affected persons in relevant cases, and perform FRIA where Article 27 requires it.

This is why high risk readiness is always broader than Articles 9 to 15 alone.

- Technical documentation retained for 10 years where the Act requires it.
- Logs under provider or deployer control retained for at least six months unless applicable Union or national law sets a different period.
- Article 49 registration checked for relevant Annex III systems.
- Article 27 FRIA workflow active for public bodies, private entities providing public services, and the listed Annex III finance cases.
- Affected person notices and complaint handling path defined where decisions concern natural persons.
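The two retention bullets above can be sketched as a simple policy check. The durations are taken directly from the checklist (ten years for technical documentation, six months as the log floor); everything else, including the artifact kinds, is an illustrative assumption.

```python
# Illustrative retention check using the periods from the checklist above.
# Other applicable law may lengthen the log period; this sketch only
# encodes the floor.
from datetime import date, timedelta

DOC_RETENTION_DAYS = 365 * 10  # technical documentation: 10 years
LOG_RETENTION_DAYS = 183       # logs: at least six months

def still_in_retention(created: date, kind: str, today: date) -> bool:
    days = DOC_RETENTION_DAYS if kind == "technical_documentation" else LOG_RETENTION_DAYS
    return today <= created + timedelta(days=days)

print(still_in_retention(date(2026, 1, 1), "log", date(2026, 5, 1)))  # True
```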

## Release and post market checklist

A high risk system is not done at first release. Post market monitoring, serious incident handling, corrective action, and change review all need named owners and templates. Learning systems and major updates also need careful review to determine whether a change was already covered by the original assessment or amounts to a substantial modification.

Your release checklist should therefore be paired with a steady state monitoring checklist.

- Conformity route confirmed and completed before placement on the market or putting into service.
- CE marking applied where required.
- Post market monitoring plan created and linked to production telemetry and support channels.
- Serious incident and corrective action workflow tested.
- Material changes reviewed against substantial modification criteria.
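The release items above can be mirrored in a simple pre-release gate. The item names are invented for illustration; the pattern is that release stays blocked until every item carries confirmation.

```python
# Hypothetical release gate mirroring the checklist; item names are
# illustrative, not a list prescribed by the Act.
RELEASE_GATE = {
    "conformity_assessment_completed": True,
    "ce_marking_applied": True,
    "post_market_monitoring_plan_linked": True,
    "incident_workflow_tested": False,
}

# Anything still False blocks placement on the market.
blocking = [item for item, done in RELEASE_GATE.items() if not done]
print("release blocked by:", blocking)  # ['incident_workflow_tested']
```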


## Turn this EU AI Act (Regulation (EU) 2024/1689) high risk checklist into an operational assessment

Assessment Autopilot can turn this EU AI Act (Regulation (EU) 2024/1689) high risk checklist into a reusable operational workflow inside Sorena. Teams working on EU AI Act (Regulation (EU) 2024/1689) compliance can keep owners, evidence, and next steps aligned without copying this guide into separate documents.

- [Open Assessment Autopilot for the EU AI Act (Regulation (EU) 2024/1689) high risk checklist](/solutions/assessment.md): Start from this checklist and turn the guidance into owned tasks, evidence requests, and review checkpoints.
- [Talk through EU AI Act (Regulation (EU) 2024/1689)](/contact.md): Review your current process, evidence gaps, and next steps for high risk compliance.

## Primary sources

- [Regulation (EU) 2024/1689 (EU AI Act) - Official Journal](https://eur-lex.europa.eu/eli/reg/2024/1689/oj?ref=sorena.io) - Primary legal text for scope, obligations, annexes, transition rules, and penalties.
- [AI Act Service Desk - Annex III high risk use cases](https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3?ref=sorena.io) - Official annex landing page for high risk use cases.
- [AI Act Service Desk - Article 27 fundamental rights impact assessment](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-27?ref=sorena.io) - Official support page for deployer FRIA duties.
- [AI Act Service Desk - AI Act Explorer](https://ai-act-service-desk.ec.europa.eu/en/ai-act-explorer?ref=sorena.io) - Official article and annex navigator maintained for implementation support.

## Related Topic Guides

- [EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide](/artifacts/eu/artificial-intelligence-act/applicability-and-roles.md): Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
- [EU AI Act Applicability Test | Scope, Role, and Obligation Routing](/artifacts/eu/artificial-intelligence-act/applicability-test.md): Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
- [EU AI Act Checklist | Practical Compliance Checklist by Obligation](/artifacts/eu/artificial-intelligence-act/checklist.md): Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
- [EU AI Act Compliance Program | Build an Operational AI Act Program](/artifacts/eu/artificial-intelligence-act/compliance.md): Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
- [EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan](/artifacts/eu/artificial-intelligence-act/deadlines-and-compliance-calendar.md): Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
- [EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties](/artifacts/eu/artificial-intelligence-act/faq.md): Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
- [EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide](/artifacts/eu/artificial-intelligence-act/gpai-and-foundation-model-obligations.md): Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
- [EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes](/artifacts/eu/artificial-intelligence-act/high-risk-ai-use-cases-by-industry.md): See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
- [EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure](/artifacts/eu/artificial-intelligence-act/penalties-and-fines.md): Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
- [EU AI Act Prohibited AI Practices | Article 5 Screening Guide](/artifacts/eu/artificial-intelligence-act/prohibited-ai-practices.md): Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
- [EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI](/artifacts/eu/artificial-intelligence-act/requirements.md): Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
- [EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap](/artifacts/eu/artificial-intelligence-act/timeline-and-phasing-roadmap.md): Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
- [EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide](/artifacts/eu/artificial-intelligence-act/transparency-labeling-and-user-disclosures.md): Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
- [EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-iso-42001.md): Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
- [EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties](/artifacts/eu/artificial-intelligence-act/eu-ai-act-vs-nist-ai-rmf.md): Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.


---

[Privacy Policy](https://www.sorena.io/privacy) | [Terms of Use](https://www.sorena.io/terms-of-use) | [DMCA](https://www.sorena.io/dmca) | [About Us](https://www.sorena.io/about-us)

(c) 2026 Sorena AB (559573-7338). All rights reserved.

Source: https://www.sorena.io/artifacts/eu/artificial-intelligence-act/high-risk-requirements-checklist
