The EU AI Act represents the world's first comprehensive AI regulation. Understand its requirements and how to prepare your organization for compliance.
The EU AI Act entered into force on August 1, 2024, making it the world's first comprehensive legal framework for artificial intelligence. For organizations operating in or serving the European market, it is not optional — and its reach extends well beyond EU-headquartered firms. Any organization that deploys AI systems that affect people in the EU is subject to its requirements.
This article covers what the EU AI Act actually requires, which organizations are affected, and what compliance looks like in practice.
The EU AI Act organizes AI systems into four risk categories, each carrying different compliance obligations.
Unacceptable-risk systems are prohibited outright. These include AI systems that manipulate human behavior through subliminal techniques, exploit the vulnerabilities of specific groups, or enable social scoring by public authorities, as well as (with limited exceptions) real-time biometric identification in public spaces.
High-risk systems carry the most significant compliance obligations. The Act defines high-risk AI across two categories: AI systems that are safety components of regulated products (medical devices, vehicles, industrial machinery), and AI systems used in eight specific application areas: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.
Limited-risk systems — primarily chatbots and AI-generated content — carry transparency obligations: users must be informed they are interacting with AI, and synthetic content must be disclosed as such.
Minimal-risk systems — the vast majority of AI applications — face no mandatory requirements under the Act, though voluntary codes of conduct are encouraged.
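For teams building an internal inventory, the four tiers can be modeled as a simple tagging taxonomy. The sketch below is purely illustrative: the enum labels, the use-case names, and the mapping are assumptions for bookkeeping, not the Act's legal definitions, and actual classification requires legal analysis of the Act's text.

```python
from enum import Enum

class RiskTier(Enum):
    """Internal labels for the Act's four risk categories (not legal terms)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "full compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory requirements"

# Hypothetical mapping from internal use-case tags to tiers, for
# inventory tagging only -- real classification needs legal review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier; default to HIGH pending legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review rather than silently assuming a system is out of scope.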
For organizations deploying high-risk AI systems, the EU AI Act imposes a substantial set of obligations.
Risk management system. Providers must establish, implement, document, and maintain a risk management system throughout the AI system lifecycle. This is not a one-time assessment — it is a continuous process.
Data governance. Training, validation, and testing datasets must meet quality criteria: relevance, representativeness, freedom from errors, and completeness. Data governance practices must be documented.
Technical documentation. Comprehensive documentation must be prepared before the system is placed on the market, covering system design, development methodology, testing results, and risk management measures.
Transparency and instructions for use. High-risk systems must be designed to allow deployers to understand what the system does and how to use it appropriately. Instructions for use must be provided.
Human oversight. High-risk systems must be designed to allow effective human oversight. This includes the ability to monitor, intervene, and override the system.
Accuracy, robustness, and cybersecurity. Systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and adversarial manipulation.
Conformity assessment. Before deployment, high-risk systems must undergo a conformity assessment — either self-assessment (for most categories) or third-party assessment (for biometric identification and certain other categories).
Registration. High-risk AI systems must be registered in an EU-wide database before deployment.
Post-market monitoring. Providers must implement post-market monitoring systems and report serious incidents and malfunctions to national authorities.
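The nine obligations above lend themselves to a checklist-driven gap assessment per high-risk system. A minimal sketch, assuming a per-system evidence register; the obligation keys are illustrative shorthand, not terms from the Act.

```python
# Hypothetical checklist covering the provider obligations described above.
HIGH_RISK_OBLIGATIONS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "transparency_and_instructions",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "conformity_assessment",
    "eu_database_registration",
    "post_market_monitoring",
]

def gap_assessment(evidence: dict[str, bool]) -> list[str]:
    """Return obligations with no documented evidence of compliance."""
    return [ob for ob in HIGH_RISK_OBLIGATIONS if not evidence.get(ob, False)]
```

Running this per system yields the raw input for a remediation roadmap: each returned item is a gap needing an owner and a deadline.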
The EU AI Act distinguishes between providers (those who develop or place AI systems on the market) and deployers (those who use AI systems in a professional context). Both carry obligations, though providers bear the heavier compliance burden.
For multinational organizations, the extraterritorial scope is significant. If an AI system is used by people in the EU — regardless of where the provider or deployer is headquartered — the Act applies. Canadian and US organizations serving European clients, operating European subsidiaries, or deploying AI that affects EU residents are within scope.
Financial services firms face particular exposure. AI systems used in credit scoring, insurance underwriting, employment screening, and customer service are likely to fall within the high-risk categories. The intersection of EU AI Act obligations with existing financial regulation (DORA, MiFID II, Solvency II) creates a complex compliance landscape that requires careful mapping.
The EU AI Act's obligations are being phased in over a 36-month period from August 2024: prohibitions on unacceptable-risk systems apply from February 2025, obligations for general-purpose AI models from August 2025, most high-risk requirements from August 2026, and requirements for high-risk AI embedded in regulated products from August 2027.
For most regulated enterprises, August 2026 is the critical deadline. Organizations that have not begun compliance programs are already behind.
Effective EU AI Act compliance requires four foundational elements.
First, AI inventory and classification. Organizations must know what AI systems they operate, where they are deployed, and which risk category they fall into. Without a current, accurate inventory, compliance is impossible.
Second, gap assessment against high-risk requirements. For each system classified as high-risk, organizations must assess current state against the Act's technical and governance requirements and identify gaps.
Third, remediation roadmap. Gaps must be prioritized and addressed through a structured remediation program, with clear ownership and timelines tied to the compliance deadline.
Fourth, ongoing governance. The EU AI Act is not a one-time compliance exercise. Post-market monitoring, incident reporting, and documentation maintenance are continuous obligations. Governance structures must be built to sustain them.
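The four elements converge on a single artifact: a living inventory record per system that carries its scope, classification, and open gaps. A hypothetical sketch of such a record — the field names and the `in_scope`/`compliant` logic are simplifying assumptions for illustration, not a legal scoping test.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI inventory (field names illustrative)."""
    name: str
    business_owner: str
    affects_eu_persons: bool   # extraterritorial scope trigger
    risk_tier: str             # "unacceptable" | "high" | "limited" | "minimal"
    open_gaps: list[str] = field(default_factory=list)

    def in_scope(self) -> bool:
        # Simplified: the Act applies wherever people in the EU are affected.
        return self.affects_eu_persons

    def ready(self) -> bool:
        """Crude readiness flag: in scope, not prohibited, no open gaps."""
        return (self.in_scope()
                and self.risk_tier != "unacceptable"
                and not self.open_gaps)
```

Because post-market monitoring and documentation maintenance are continuous obligations, records like this must be re-reviewed on a cadence, not filed once and forgotten.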
Organizations that treat EU AI Act compliance as a documentation exercise will find themselves exposed. Those that build genuine governance capability — the ability to identify, assess, manage, and monitor AI risks — will be positioned to meet not just current requirements but the regulatory evolution that follows.
Aeon AI Risk Management
We help regulated enterprises build AI governance frameworks that satisfy regulators, protect the business, and enable responsible innovation.