
The EU Artificial Intelligence Act - guidelines for business leaders


The European Union has finalized its Artificial Intelligence Act (EU AI Act), a comprehensive piece of legislation designed to regulate AI systems within the EU market. The Act, the first of its kind globally (and, so far, the only one), aims to balance the immense potential of AI with the need to mitigate its inherent risks, ensuring that AI development and use align with European values and fundamental rights. This blog post serves as a guide for business leaders to understand the implications of the EU AI Act and prepare their companies for its phased application.


Scope and Coverage

The EU AI Act adopts a risk-based approach, categorizing AI systems into four levels of risk: unacceptable (Article 5), high (Article 6 and Annex III), limited (Article 50), and minimal. A schematic triage sketch follows the list.


  • Unacceptable Risk (Article 5): AI systems deemed to pose unacceptable risks are strictly prohibited. This includes manipulative or exploitative systems, social scoring, and real-time remote biometric identification in publicly accessible spaces, except in specific, strictly defined law enforcement situations (Article 5(1)(h)). These prohibitions reflect the EU's commitment to fundamental rights and democratic values.


  • High Risk (Article 6 and Annex III): AI systems identified as high risk are subject to stringent requirements before they can be placed on the market, put into service, or used. This category includes AI systems used in critical infrastructure, educational and vocational training, employment, access to and enjoyment of essential services (including social security and credit scoring), law enforcement, migration, asylum and border control management, and the administration of justice. These systems require conformity assessments (Article 43), quality management systems (Article 17), technical documentation (Article 11), and post-market monitoring (Article 72), among other obligations. Annex III lists the high-risk use cases, and Annex IV details the required technical documentation.


  • Limited Risk (Article 50): AI systems posing limited risk are subject to transparency obligations: chatbots, for example, must clearly inform users that they are interacting with an AI system (Article 50(1)). This ensures transparency in AI interactions and lets users make informed decisions.


  • Minimal Risk (Implicit): The majority of AI systems currently in use fall under minimal risk and face few regulatory requirements. This approach ensures that the Act does not stifle innovation in low-risk areas.
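For teams building an internal inventory of their AI systems, the four tiers can be captured in a simple triage structure. The sketch below is illustrative only, not legal advice: the keyword lists are hypothetical placeholders, and a real classification must follow the legal criteria of Articles 5 and 6, Annex III, and Article 50, ideally with qualified counsel.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the EU AI Act (Art. 5; Art. 6 / Annex III; Art. 50)."""
    UNACCEPTABLE = "prohibited"   # Art. 5 practices: banned outright
    HIGH = "high"                 # Art. 6 / Annex III: strict obligations
    LIMITED = "limited"           # Art. 50: transparency duties
    MINIMAL = "minimal"           # everything else: no new obligations

# Hypothetical keyword hints for a first-pass triage; real classification
# must follow the Act's legal criteria, not string matching.
TRIAGE_HINTS = {
    RiskTier.UNACCEPTABLE: {"social scoring", "subliminal manipulation"},
    RiskTier.HIGH: {"credit scoring", "recruitment", "border control"},
    RiskTier.LIMITED: {"chatbot", "deepfake"},
}

def triage(intended_purpose: str) -> RiskTier:
    """Return a *provisional* tier for an AI system's intended purpose."""
    purpose = intended_purpose.lower()
    # Check from the most to the least restrictive tier, mirroring how
    # the Act's prohibitions take precedence over the high-risk regime.
    for tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED):
        if any(hint in purpose for hint in TRIAGE_HINTS[tier]):
            return tier
    return RiskTier.MINIMAL

print(triage("Chatbot for customer support"))    # RiskTier.LIMITED
print(triage("Automated credit scoring model"))  # RiskTier.HIGH
```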


Key Implications for Companies

If your company develops, deploys, or distributes AI systems within the EU, or if your AI system's output is used within the EU, you will likely be affected by the AI Act (Article 2). Here’s what you need to know:


  • Determine the Risk Level of Your AI System: The first step is to assess the risk level of your AI system based on its intended purpose and potential impact. Use the criteria outlined in the AI Act and consider seeking legal advice for complex cases.


  • Compliance for High-Risk AI Systems: If your system is classified as high risk, prepare for significant compliance efforts. This involves establishing a robust risk management system (Article 9), ensuring data quality and governance (Article 10), developing comprehensive technical documentation (Article 11), implementing human oversight mechanisms (Article 14), and ensuring accuracy, robustness, and cybersecurity (Article 15). These requirements apply regardless of where your company is established (Article 2(1)(a)).


  • Transparency for Limited-Risk Systems: For limited-risk systems, ensure transparency by clearly informing users that they are interacting with an AI system; a minimal implementation sketch follows this list. This simple measure fosters trust and avoids potential legal issues.


  • Exemptions: The AI Act exempts AI systems developed and used exclusively for scientific research and development (Article 2(6)) and those used exclusively for military, defence, or national security purposes (Article 2(3)). However, if these systems are subsequently repurposed for civilian or other uses falling under the Act's scope, they become subject to the corresponding obligations.


  • General-Purpose AI Models: Providers of general-purpose AI models, particularly those posing systemic risks, face specific obligations regarding transparency and documentation (Article 53) and risk mitigation (Article 55). The Act defines these models separately from AI systems (Article 3), and their obligations apply once the models are placed on the market.
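For the transparency duty flagged above (Article 50(1)), the disclosure can be wired directly into a chatbot's entry point so that no conversation starts without it. The sketch below is a minimal illustration under that assumption; the class, field names, and notice wording are hypothetical, and the exact phrasing of a compliant notice should be reviewed by counsel.

```python
from dataclasses import dataclass, field

# Hypothetical wording of the Art. 50(1) notice; verify with legal counsel.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Ask for a human agent at any time."
)

@dataclass
class ChatSession:
    """Hypothetical chat session that front-loads the AI disclosure."""
    messages: list = field(default_factory=list)

    def __post_init__(self):
        # Disclose the AI nature of the system before any interaction,
        # so no user message is ever answered without the notice.
        self.messages.append({"role": "system_notice", "text": AI_DISCLOSURE})

    def send(self, user_text: str) -> None:
        self.messages.append({"role": "user", "text": user_text})

session = ChatSession()
session.send("What are your opening hours?")
for m in session.messages:
    print(m["role"], "=>", m["text"])
```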


Timeline for Implementation

The EU AI Act entered into force on 1 August 2024 and applies in stages, giving companies time to adapt (a small deadline-tracking sketch follows the list):


  • 2025: The prohibitions on unacceptable-risk practices and the AI-literacy obligations apply from 2 February 2025, and the obligations for general-purpose AI models from 2 August 2025. At this point, companies should have their risk classification protocols in place.


  • 2026: The high-risk compliance requirements, such as documentation, human oversight, and data governance, apply from 2 August 2026, when the bulk of the Act becomes applicable. By this stage, companies using high-risk AI must have compliance mechanisms and documentation ready.


  • 2027: The remaining provisions, notably the high-risk rules for AI embedded in products regulated under Annex I, apply from 2 August 2027. Companies should be fully compliant by then, ensuring they maintain records and can support an audit.
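These staggered dates lend themselves to simple programmatic tracking. The sketch below encodes the application dates that follow from the Act's entry into force on 1 August 2024; the helper function and data structure are illustrative assumptions, not an official compliance tool.

```python
from datetime import date

# Application dates following the Act's entry into force on 2024-08-01.
MILESTONES = {
    date(2025, 2, 2): "Prohibitions (Art. 5) and AI-literacy duties apply",
    date(2025, 8, 2): "General-purpose AI model obligations apply",
    date(2026, 8, 2): "Most provisions, incl. Annex III high-risk rules, apply",
    date(2027, 8, 2): "High-risk rules for Annex I regulated products apply",
}

def upcoming(today: date) -> list[str]:
    """Hypothetical helper: list milestones not yet in force on `today`."""
    return [
        f"{d.isoformat()}: {label}"
        for d, label in sorted(MILESTONES.items())
        if d > today
    ]

for line in upcoming(date(2025, 6, 1)):
    print(line)
```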


EU: Over-regulating and under-investing

The EU AI Act is another example of over-regulation (see here for another example). The compliance requirements for high-risk systems are meant to contribute to robust, reliable, and ethical AI solutions. Yet while the Act aims to foster responsible AI development, its stringent regulatory framework undeniably creates substantial burdens and costs for companies, especially SMEs and startups. The extensive compliance requirements, including conformity assessments, quality management systems, and ongoing monitoring, demand significant financial and human resources; the Center for Data Innovation estimates that the EU AI Act will cost European businesses 10.9 billion EUR per year.

This overhead could stifle innovation and discourage investment in AI within Europe. Companies might be incentivized to relocate their AI development and deployment to regions with less stringent regulations, potentially leading to a "brain drain" of talent and investment away from Europe. Such a shift would undermine the Act's intended purpose, creating a global AI landscape fragmented by varying levels of ethical considerations and safety standards.

To conclude with Emmanuel Macron's words from October 2024: "We are over-regulating and under-investing. Just in the 2 to 3 years to come, if we follow our classical agenda, we will be out of the market. I have no doubt!"

