In the first webinar of our Dataiku AI Governance web series, Jacob Beswick, Director for AI Governance Solutions at Dataiku, discusses the EU AI Act’s main interventions and their implications for organizations. This piece aims to inform and engage readers on the critical aspects of the EU AI Act, offering insights into compliance, risk management, and strategic planning.
The EU AI Act: What Is It For?
The EU AI Act aims to protect EU citizens by promoting human-centric and trustworthy AI, ensuring a high level of protection for health, safety, and fundamental rights while mitigating the harmful effects of AI systems. At the same time, the Act strives to balance these protections with creating opportunities for innovation and economic benefit from the use of AI.
Introducing the Risk Tiering System
The EU AI Act introduces a risk-based classification system for AI systems, categorizing them into four risk tiers:
- Unacceptable Risk Tier: AI systems with untenable potential for harm, including those designed to manipulate people subliminally or exploit protected characteristics. Examples include systems predicting criminal behavior or expanding facial recognition databases through untargeted scraping of facial images. AI systems that fall into this tier cannot be placed on the market.
- High-Risk Tier: AI systems with serious potential for harm, subject to the most stringent obligations before they can be put on the market. These include biometric identification, categorization and emotion recognition systems, AI in critical infrastructure, and AI used in employment and worker management, amongst other applications.
- Limited-Risk Tier: AI systems with some potential for harm, such as those generating or manipulating audio, video, text, or image content as well as chatbots.
- Minimal-Risk Tier: AI systems with no significant potential for harm, including spam filters and AI-enabled video games. These systems have no new obligations under the Act, and any AI Governance interventions are voluntary.
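For organizations mapping their AI assets against these tiers, the classification logic above can be sketched in code. This is a purely illustrative sketch, not anything prescribed by the Act: the system names and the tier assignments are hypothetical examples drawn from the categories described above.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers introduced by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # cannot be placed on the EU market
    HIGH = "high"                  # stringent pre-market obligations
    LIMITED = "limited"            # transparency-style obligations
    MINIMAL = "minimal"            # no new obligations; governance is voluntary


# Hypothetical internal AI inventory, tagged with tiers per the examples above.
AI_INVENTORY = {
    "subliminal_ad_optimizer": RiskTier.UNACCEPTABLE,
    "biometric_id_system": RiskTier.HIGH,
    "hr_candidate_screener": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def market_blocked(tier: RiskTier) -> bool:
    """Unacceptable-risk systems cannot be placed on the market at all."""
    return tier is RiskTier.UNACCEPTABLE


def systems_with_obligations(inventory: dict) -> list[str]:
    """List systems whose tier carries mandatory obligations under the Act."""
    mandatory = {RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED}
    return sorted(name for name, tier in inventory.items() if tier in mandatory)
```

An inventory like this is a natural starting point for the compliance mapping discussed later in this piece: every system outside the minimal-risk tier warrants a closer look.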
Obligations for General-Purpose AI Models
The Act delineates new responsibilities for general-purpose AI models and their providers. These obligations are crucial as general-purpose AI models can be applied in diverse contexts, posing unique regulatory challenges. Key requirements include:
- Maintaining up-to-date technical documentation.
- Publishing a summary of the content used for training.
- Complying with copyright obligations.
Important: Downstream users that modify, fine-tune, or deploy general-purpose AI models must also adhere to these regulations.
Penalty Regime for Non-Compliance
Non-compliance with the EU AI Act can result in severe financial penalties. Violations involving prohibited AI systems carry maximum fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher. Non-compliance involving high-risk AI systems can draw fines of up to €15 million or 3% of annual worldwide turnover; fines can also apply where misleading or incorrect documentation is provided for conformity assessments. This stringent penalty regime underscores the importance of compliance and proactive risk management for organizations.
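To make the penalty ceilings concrete, the arithmetic can be sketched as follows. The Act applies the higher of a fixed amount and a percentage of annual worldwide turnover; the €2 billion turnover figure below is an arbitrary illustrative assumption, not from the Act or this article.

```python
def max_fine_eur(fixed_cap_eur: float, turnover_pct: float,
                 annual_worldwide_turnover_eur: float) -> float:
    """Fine ceiling under the EU AI Act: the higher of a fixed cap
    and a percentage of annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_worldwide_turnover_eur)


# Illustrative company with €2B annual worldwide turnover (assumed figure).
TURNOVER = 2_000_000_000

# Prohibited AI systems: up to €35M or 7% of turnover.
prohibited_cap = max_fine_eur(35_000_000, 0.07, TURNOVER)  # €140M here

# High-risk non-compliance: up to €15M or 3% of turnover.
high_risk_cap = max_fine_eur(15_000_000, 0.03, TURNOVER)   # €60M here
```

For a large enterprise, the turnover-based percentage will usually dominate the fixed amount, which is exactly why exposure scales with company size.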
What’s Next? Preparing for Compliance
Organizations must thoroughly understand the requirements of the EU AI Act, assess their impact, and implement necessary changes. While these requirements formally apply to the high-risk tier, the Act also encourages their voluntary adoption for limited- and minimal-risk AI systems. At a high level, the requirements include:
- Establishing Risk Management Systems: Continuous processes to identify, evaluate, and manage risks across the AI system's lifecycle.
- Ensuring Data Governance: Requirements for training, validation, and testing datasets, ensuring data relevance and compliance.
- Maintaining Comprehensive Technical Documentation: Detailed documentation proving the AI system's compliance, guided by Annex IV of the Act.
- Implementing Record-Keeping Mechanisms: Automatic recording of events over the system's lifetime for post-market monitoring.
- Ensuring Transparency and Human Oversight: Designing systems for user interpretability and appropriate use, with clear instructions and human oversight capabilities.
- Securing AI Systems: Protecting against external threats and maintaining accuracy, robustness, and cybersecurity.
Conclusion
The EU AI Act represents a significant regulatory framework that balances innovation with protection. Organizations must prepare for compliance by mapping current and future AI assets, understanding risk levels, and planning strategically. The official publication and subsequent enforcement will bring these requirements into effect, necessitating proactive preparation.
This was, of course, just an introduction to the obligations of the EU AI Act. To stay ahead and ensure your organization is prepared to navigate the necessary governance mechanisms under the EU AI Act, we encourage you to watch the webinar replay and register for our upcoming session, “Key Pillars for Achieving EU AI Act Readiness,” in our AI Governance web series.