What is the EU AI Act?
The EU Artificial Intelligence Act is a comprehensive regulatory framework governing the development, deployment, and use of AI systems within the European Union. First proposed in 2021 and adopted in 2024, the Act entered into force in August 2024 and takes effect in phases, with most of its obligations applying from 2026.
The Act’s primary objective is to ensure that AI systems placed on or used within the EU market are safe, transparent, explainable, and respectful of fundamental rights, while still enabling innovation. Unlike sector-specific regulations, the AI Act applies across industries and technologies, making it one of the most sweeping digital governance laws ever enacted.
At its core, the AI Act introduces a risk-based regulatory model. Rather than treating all AI systems equally, it classifies them based on the level of risk they pose to individuals and society, and imposes obligations proportionate to that risk.
This approach reflects a recognition that AI is not inherently dangerous, but that certain uses—especially those affecting rights, safety, or access to essential services—require stronger oversight and controls.
EU AI Act risk categories
The EU AI Act risk levels dictate compliance obligations: certain harmful uses are banned outright, high-risk systems must meet transparency and conformity requirements, and lower-risk applications are encouraged to adopt voluntary codes of conduct. The Act defines four risk categories: unacceptable, high, limited, and minimal.
Unacceptable risk: Certain AI practices are completely prohibited because they are considered incompatible with EU values and fundamental rights. EU AI Act unacceptable-risk systems include those that manipulate human behavior in harmful ways, exploit vulnerable populations, or enable governments to engage in social scoring. These systems cannot be developed, sold, or used in the EU under any circumstances.
High-risk AI systems: EU AI Act high-risk systems are permitted, but only when strict requirements are met. These are typically critical applications that handle sensitive data in areas such as:
- Employment and workforce management
- Education and student assessment
- Creditworthiness and financial services
- Healthcare diagnostics and treatment
- Law enforcement and public safety
- Border control and immigration
- Critical infrastructure management
EU AI Act high-risk systems are subject to extensive obligations, including risk assessments, data governance controls, human oversight, technical documentation, logging, transparency, and ongoing monitoring.
Limited-risk AI systems: Limited-risk systems must meet specific transparency obligations, such as informing users when they are interacting with an AI system (for example, chatbots or deepfake content).
Minimal-risk AI systems: Most AI applications fall into this category and face no new regulatory obligations beyond existing laws. Examples include AI used in gaming, photo enhancement, or spam filtering.
The majority of regulatory focus and internal audit attention will be on high-risk AI systems, as these carry the most significant compliance, reputational, and operational exposure.
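To make the tiers concrete, below is a minimal Python sketch of how an organization might encode the four categories and their headline obligations for internal triage. The names (AIActRiskTier, USE_CASE_TIERS) and the example mappings are illustrative assumptions; classifying a real system still requires legal analysis of the Act and its annexes.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four EU AI Act risk tiers (internal labels, not legal terms of art)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # permitted only under strict controls
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no new obligations beyond existing law

# Illustrative, non-exhaustive mapping of use-case areas to tiers,
# based on the categories described above.
USE_CASE_TIERS = {
    "social_scoring_by_public_authorities": AIActRiskTier.UNACCEPTABLE,
    "employment_screening": AIActRiskTier.HIGH,
    "credit_scoring": AIActRiskTier.HIGH,
    "customer_service_chatbot": AIActRiskTier.LIMITED,
    "spam_filtering": AIActRiskTier.MINIMAL,
}

def headline_obligations(tier: AIActRiskTier) -> list[str]:
    """Return a simplified reminder of the obligations attached to each tier."""
    return {
        AIActRiskTier.UNACCEPTABLE: ["prohibited: do not develop, sell, or use in the EU"],
        AIActRiskTier.HIGH: ["risk assessment", "data governance", "human oversight",
                             "technical documentation", "logging", "ongoing monitoring"],
        AIActRiskTier.LIMITED: ["inform users they are interacting with AI",
                                "label AI-generated or deepfake content"],
        AIActRiskTier.MINIMAL: ["no new obligations beyond existing law"],
    }[tier]
```

Even a simple lookup like this helps route newly discovered systems to the appropriate level of review.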
Why the EU AI Act matters beyond Europe
One of the most important features of the EU AI Act is its extraterritorial scope. The Act applies not only to organizations established in the EU, but also to organizations outside the EU if:
- They sell AI systems on the EU market, or
- The outputs of their AI systems are used within the EU
These two criteria mean that any organization that develops or deploys AI systems may be fully subject to the Act, even if it has no physical presence in Europe.
The Act's reach mirrors that of the GDPR, which forced organizations worldwide to rethink data governance, privacy controls, and accountability models; the AI Act is expected to have a similar effect on AI governance. From a practical standpoint, this means non-EU organizations must assume that EU AI Act compliance is unavoidable if they serve EU customers, employees, patients, or users in any capacity involving AI.
How non‑EU organizations must respond
For organizations outside the EU, compliance with the AI Act requires more than minor policy updates. It demands structural changes to how AI systems are identified, governed, controlled, and monitored.
AI inventory and classification
The first step is developing a comprehensive inventory of AI systems in use across the organization, including:
- Internally developed models
- Third-party AI tools
- Embedded AI in vendor platforms
- Generative AI used by employees
- AI used indirectly through outsourcing arrangements
Each system must be assessed to determine whether it falls under the Act’s definition of AI and, if so, which risk category applies. The EU AI Act risk classification drives all downstream compliance obligations. Many organizations underestimate how pervasive unsanctioned AI already is in their operations. Without a reliable inventory, compliance is impossible.
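As a practical starting point, the inventory can be a structured record per system that captures where the AI came from, who owns it, and whether it touches the EU. The AISystemRecord structure and the vendor name below are hypothetical; the fields simply mirror the sources listed above.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative fields only)."""
    name: str
    owner: str                    # accountable business owner
    source: str                   # "internal", "third-party", "embedded", "generative", "outsourced"
    vendor: str | None            # populated for third-party or embedded AI
    purpose: str                  # what the system does and for whom
    serves_eu_users: bool         # triggers the extraterritorial-scope analysis
    risk_tier: str | None = None  # set after classification against the Act
    last_reviewed: date | None = None

# Example entry for an embedded vendor feature discovered during the inventory.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-screening module",
        owner="HR Operations",
        source="embedded",
        vendor="ExampleHRPlatform",  # hypothetical vendor
        purpose="ranks job applicants for recruiters",
        serves_eu_users=True,
    ),
]
```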
Governance and accountability structures
The EU AI Act explicitly requires organizations to define clear accountability for AI risk. This includes ownership for:
- AI strategy and adoption
- Risk assessment and classification
- Compliance with regulatory requirements
- Ongoing monitoring and incident response
For non-EU organizations, this often means extending existing governance frameworks to include AI, rather than treating it as an IT or innovation concern.
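A lightweight way to make that ownership explicit is an accountability map maintained alongside existing governance documentation. The role assignments below are assumptions for illustration only; the point is that every area has a named owner.

```python
# Illustrative accountability map; actual role names and owners will differ by organization.
AI_ACCOUNTABILITY = {
    "ai_strategy_and_adoption": "Chief Technology Officer",
    "risk_assessment_and_classification": "Chief Risk Officer",
    "regulatory_compliance": "Chief Compliance Officer",
    "monitoring_and_incident_response": "Head of Internal Audit",
}
```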
Third‑party and supply chain risk
A significant portion of AI risk originates from vendors. Cloud providers, HR platforms, marketing tools, and security solutions increasingly rely on AI models that may qualify as high-risk under the Act. Many software vendors have rushed to add embedded AI features to their applications, features that customers may not be able to deactivate or explain.
Non-EU organizations must ensure that vendor contracts, due diligence processes, and ongoing monitoring comply with the EU AI Act’s requirements. Vendor contracts should include access to documentation, audit rights, and assurances regarding data quality, model governance, and regulatory compliance.
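In practice, those contractual and due-diligence expectations can be captured as a per-vendor checklist. The VendorAIDueDiligence record below is a hypothetical sketch, not a legal template; it simply tracks whether each assurance mentioned above is in place.

```python
from dataclasses import dataclass

@dataclass
class VendorAIDueDiligence:
    """Per-vendor checklist of AI-related assurances (illustrative, not a legal template)."""
    vendor: str
    ai_features_disclosed: bool        # vendor documents embedded AI features and their purpose
    ai_features_can_be_disabled: bool  # customers can deactivate AI functionality if required
    documentation_access: bool         # contract grants access to technical documentation
    audit_rights: bool                 # contract grants audit rights over the AI system
    data_quality_assurances: bool      # assurances on training data quality and governance
    compliance_attestation: bool       # vendor attests to EU AI Act compliance where applicable

    def gaps(self) -> list[str]:
        """Return the checklist items that are not yet satisfied."""
        return [name for name, value in vars(self).items()
                if isinstance(value, bool) and not value]
```

Reviewing the unmet items per vendor gives procurement, legal, and audit teams a concrete follow-up list.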
Documentation and transparency
The EU AI Act places heavy emphasis on documentation. Organizations must be able to demonstrate:
- How AI systems were designed and trained
- What data was used and why it is appropriate
- How risks were assessed and mitigated
- How human oversight is implemented
- How system performance and outcomes are monitored
For organizations accustomed to agile development and rapid deployment, this documentation burden represents a cultural shift. In the rush to get new products to market, teams often treat documentation as an afterthought.
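One way to keep documentation from becoming an afterthought is to maintain a simple skeleton of required evidence per high-risk system and track which sections remain empty. The section names below paraphrase the list above; they are not the Act's formal Annex IV headings.

```python
# Illustrative documentation skeleton for a high-risk AI system.
TECHNICAL_DOCUMENTATION_SKELETON = {
    "design_and_training": ["system architecture", "model type", "training procedure"],
    "data": ["data sources", "selection rationale", "quality and bias checks"],
    "risk_management": ["identified risks", "mitigations", "residual-risk acceptance"],
    "human_oversight": ["oversight measures", "override and escalation procedures"],
    "monitoring": ["performance metrics", "logging", "post-market monitoring plan"],
}

def missing_sections(evidence: dict[str, list[str]]) -> list[str]:
    """Return the documentation sections that have no evidence attached yet."""
    return [section for section in TECHNICAL_DOCUMENTATION_SKELETON
            if not evidence.get(section)]
```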