The EU AI Act was proposed in April 2021, politically agreed upon in December 2023, and published in the Official Journal of the EU on 12 July 2024. It is a keystone regulation for the development and deployment of AI in the EU and around the world.
Following its publication, the Act entered into force on 1 August 2024. As businesses across the EU and beyond brace for its impact, understanding the timeline and the steps necessary to comply is crucial.
Understanding the Act’s Structure
The EU AI Act categorizes AI systems into four risk tiers:
- Unacceptable Risk: Systems that pose a direct threat to safety or fundamental rights, such as social scoring and real-time biometric identification, are banned.
- High Risk: These systems require stringent compliance measures and include applications in critical infrastructure, education, employment, and law enforcement.
- Limited Risk: These systems carry transparency obligations but are less regulated than high-risk systems.
- Minimal Risk: These systems face the least regulation, but businesses are encouraged to adopt voluntary codes of conduct.
A Timeline for Implementation
The timeline for the AI Act’s implementation spans several years, with critical milestones that entities must track and prepare for. These include:
Key Dates for Implementation
| Date | Timeline | Description |
| --- | --- | --- |
| 1 August 2024 | Entry into force | The AI Act officially enters into force 20 days after its publication in the Official Journal of the EU |
| 2 February 2025 | Six months after entry into force | Prohibitions on Unacceptable Risk AI become effective |
| 2 August 2025 | 12 months after entry into force | Obligations for providers of general-purpose AI models commence; member states must appoint competent authorities; the European Commission begins its annual review of the list of prohibited AI systems |
| 2 February 2026 | 18 months after entry into force | The European Commission implements post-market monitoring regulations |
| 2 August 2026 | 24 months after entry into force | Obligations for high-risk AI systems listed in Annex III (such as biometric systems, critical infrastructure, education, and employment) become effective; member states must establish rules on penalties and set up at least one operational AI regulatory sandbox; the European Commission reviews and possibly amends the list of high-risk AI systems |
| 2 August 2027 | 36 months after entry into force | Obligations for high-risk AI systems not listed in Annex III but intended as safety components of products come into effect; this covers high-risk AI systems that must undergo third-party conformity assessments under existing EU laws (for example, toys, medical devices, and civil aviation security) |
| By the end of 2030 | | Obligations come into effect for AI systems that are components of large-scale information technology systems established by EU law in the areas of freedom, security, and justice (for instance, the Schengen Information System) |
The Dangers of Non-Compliance
Failure to comply with the AI Act can result in severe consequences. Businesses may face restrictions on market access for their products, significant administrative fines, legal challenges, and loss of customer trust. Administrative fines range from up to 35 million euros or 7% of global annual turnover for the most serious violations (such as deploying prohibited AI practices) down to 7.5 million euros or 1% of turnover for supplying incorrect information to authorities, in each case whichever is higher. Additionally, non-compliant AI systems may be banned from the EU market, impacting business operations and growth prospects.
Moreover, non-compliance can erode trust among customers, partners, and regulators, potentially leading to loss of business opportunities and competitive disadvantage. It is imperative for businesses to take the necessary steps now to ensure compliance and safeguard their operations.
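To make the fine structure concrete, the cap for a given tier is the higher of the fixed amount and the percentage of worldwide annual turnover. A minimal sketch of that calculation (the turnover figure below is purely illustrative, and this is not legal advice):

```python
# Illustrative only: fine tiers taken from the figures described above;
# the example turnover is an assumption.

def fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum administrative fine for a tier: the higher of the fixed
    amount and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# A company with EUR 1 billion global turnover, prohibited-practice tier
# (up to EUR 35 million or 7% of turnover, whichever is higher):
cap = fine_cap(35_000_000, 0.07, 1_000_000_000)
print(f"EUR {cap:,.0f}")  # prints EUR 70,000,000 (7% of 1 bn exceeds the fixed cap)
```

For smaller companies the fixed amount can dominate: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million cap applies.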
Key Compliance Steps for the Next 6 Months
As the AI Act progresses, businesses in every sector must take steps to ensure they remain compliant and mitigate risks. Here are key actions to consider over the next few months:
- Identify the AI systems and General Purpose AI Models (GPAIMs) you are using and determine if they are in scope of the Act. Organizations should undertake an “AI mapping” exercise to determine whether they are developing, using, importing, or distributing AI systems and/or developing GPAIMs and in what capacity.
- Classify the AI systems and GPAIMs based on their risk category. The Act imposes different obligations depending on the classification.
- Start incorporating AI Act requirements into your contracts. Organizations should implement changes to contract terms, due diligence, and procurement processes to anticipate AI Act risk and meet compliance requirements.
- Start developing a risk management system for AI systems and GPAIMs. This system should assess the risks associated with these systems and identify measures to mitigate those risks.
- Start developing a human oversight system for AI systems and GPAIMs. This system should ensure that AI systems and GPAIMs are used in a responsible and ethical manner.
- Start developing a record-keeping system for AI systems and GPAIMs. This system should track their development, deployment, and use.
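The mapping and classification steps above amount to building a structured inventory of AI systems, each tagged with the organization's role and a risk tier. A minimal sketch of such an inventory (the record fields and example entries are assumptions, not a format prescribed by the Act):

```python
# Hedged sketch of an AI-system inventory with risk-tier classification.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    role: str        # e.g. "provider", "deployer", "importer", "distributor"
    use_case: str
    risk_tier: RiskTier

# Example entries; classifications here are illustrative, not determinations.
inventory = [
    AISystemRecord("cv-screening", "deployer", "employment", RiskTier.HIGH),
    AISystemRecord("support-chatbot", "deployer", "customer service", RiskTier.LIMITED),
]

# Systems in the high-risk tier drive the heaviest obligations.
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print(high_risk)  # ['cv-screening']
```

Keeping the tier as an enumerated field makes it straightforward to attach tier-specific obligations (risk management, human oversight, record-keeping) to each entry as those systems mature.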
Businesses should also start considering the potential impact of the Act on their business. For example, the Act may require businesses to change their product development processes or provide additional information to customers about the AI systems and GPAIMs they use.
Finally, it is useful to consider participating in the EU AI Office AI Pact, which encourages voluntary implementation of key provisions ahead of the official application date. This proactive stance can demonstrate commitment to ethical AI practices and build trust with stakeholders.
What Are the Main Compliance Challenges?
Businesses face several key challenges when complying with the EU AI Act:
Determining Risk Levels and Compliance Requirements
One of the main challenges is correctly identifying which AI systems fall into high-risk categories and what specific compliance measures each classification entails. Misclassification can lead to non-compliance, resulting in fines and reputational damage. Implementing comprehensive training and regular audits is crucial to ensure staff correctly identify and classify AI systems according to the Act’s definitions.
Ensuring High-Quality, Unbiased Data
Businesses must ensure the data used in AI systems is of high quality and free from biases that could lead to discriminatory outcomes. Non-compliance with data quality and bias mitigation requirements can lead to legal and financial penalties and harm user trust. Developing robust data governance policies and employing advanced bias detection technologies is key.
Maintaining Detailed Documentation
Maintaining the detailed documentation required for transparency and accountability can be challenging. Inadequate documentation can obstruct regulatory audits and compliance verification processes. It is important to utilize digital tools to automate record-keeping and ensure all AI decision-making processes are logged and traceable.
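One way to automate traceable record-keeping is to emit each AI decision as a structured, timestamped log entry. A minimal sketch using only the Python standard library (the field names and example values are illustrative assumptions):

```python
# Hedged sketch: structured audit logging for AI decisions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system_name: str, input_summary: str, output_summary: str) -> dict:
    """Build a timestamped audit record, emit it as JSON, and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input": input_summary,
        "output": output_summary,
    }
    logger.info(json.dumps(record))
    return record

rec = log_ai_decision("cv-screening", "candidate profile (summary)", "shortlisted")
```

In practice these records would be shipped to durable, access-controlled storage so that decision trails survive audits rather than living only in application logs.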
Establishing Ongoing Monitoring
Establishing ongoing monitoring mechanisms to ensure continued compliance as AI systems evolve and new regulations come into effect is difficult. Failure to continuously monitor AI systems can lead to compliance lapses as systems and regulations change. Investing in compliance software and systems that can dynamically adapt is necessary.
Adapting to New Technical Requirements
Many aspects of the EU AI Act will be challenging to implement, especially in terms of technical documentation for the testing, transparency, and explanation of AI applications. Every AI application comes with its own business processes, impact, and risks, making a one-size-fits-all approach difficult.
Collaboration among CISOs, legal teams, AI development teams, and other business leaders will be key to navigating the complexities of the EU AI Act. By taking the necessary steps toward effective AI governance in the coming months, companies can navigate the regulatory landscape, mitigate risks, and take advantage of the opportunities presented by this landmark legislation.
If you are unsure where to start, schedule a consultation with Elevate! Our Guardian AI Solution will ensure you have a centralized line of sight on your AI Governance, AI Risk Management and AI Regulatory compliance obligations.