As with all other key aspects of the economy, the global health crisis we are going through is having a deep impact on AI development in organizations. In an environment compatible with remote work, COVID-19 is acting as a catalyst for increased data usage: many companies need to develop a strong, data-supported understanding of the new normal and react accordingly. Yet this shouldn't overshadow other structural trends happening in the background, starting with the emergence of a new regulatory framework that will deeply reshape how AI is scaled.
This blog post is the third of a series (check out the first one on France here and the second on Canada here) focusing on these directives, regulations, and guidelines — which are on the verge of being enforced — and their key learnings for AI and analytics leads.
Today, we are zooming in on the "Artificial Intelligence Act" recently introduced by the European Commission.* The European Commission is the first institution worldwide to propose a legal framework to harmonize rules on AI use. The proposal still needs to go through the European Parliament and the European Council, yet it is likely to become the new gold standard for organizations around the world building, using, and/or selling AI systems.
Here, we’re going to build on the insights we have gained from the first two articles. So far, national regulators seem to have relied on existing legislation — most of the time data protection regulations — to introduce AI rule enforcement. It’s a different game this time around! The European Commission (EC) is proposing a brand new regulatory framework to protect citizens from harmful AI. This is a first-of-its-kind initiative that will highly shape AI development and deployment standards around the world, similarly to how GDPR influenced data protection globally. This proposal should make other regulators more comfortable with taking bolder stances to govern AI.
*The European Commission is the executive branch of the European Union, responsible for proposing legislation, implementing decisions, upholding the EU treaties, and managing the day-to-day business of the EU.
Wait, Why Is AI Regulation Needed Again?
AI is a deeply transformative technology. By processing large amounts of data, AI models can automatically generate high-value content, predictions, recommendations, or decisions on a wide range of topics (e.g., targeted ads, facial recognition, self-driving cars).
Yet this technology can be difficult to understand or control over time, and the data fed to AI systems can perpetuate biases and discrimination. These harms endanger the fundamental rights of EU citizens, such as the right to non-discrimination. After three years of research and consultations, the EC is proposing a regulatory framework to the European Council and the European Parliament to address AI risks and maximize AI's benefits.
How Will AI Be Regulated?
Building on the feedback gathered last year on its white paper, the EC has chosen a risk-based approach to regulating AI in the European market. The objective of this approach is to assess the risk of any given product or service before and after it is launched on the market, ultimately to prevent harm to EU citizens.
The EC proposal defines four risk categories with more or less strict compliance procedures. The rule of thumb is “the higher the risk, the stricter the rule.” We’ve outlined the categories below:
- Unacceptable risk: practices identified as a clear threat are banned (e.g., social scoring, facial recognition in public spaces, extreme nudging)
- High risk: practices identified as a potential threat will have to be demonstrated as safe (e.g., essential private and public services such as obtaining a loan, transportation infrastructure, educational training, human resources, border controls, justice administration)
- Limited risk: systems posing limited threats, like chatbots, will be subject to transparency obligations so that users can make informed decisions (i.e., the user can then decide to continue or step back from using the application)
- Minimal risk: most AI systems fall into this category and are to be freely used.
Military uses of AI are excluded from the above. Please refer to the EC proposal for an extended definition of targeted practices.
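The tiered logic above can be sketched as a simple triage helper. This is an illustrative sketch only: the `classify_use_case` function and the keyword sets are hypothetical simplifications for this post, not part of the proposal or of any library, and a real assessment would follow the proposal's detailed annexes rather than keyword matching.

```python
# Illustrative sketch of the EC's four-tier, risk-based classification.
# The tiers mirror the proposal; the example sets below are hypothetical.

UNACCEPTABLE = {"social scoring", "public facial recognition", "extreme nudging"}
HIGH = {"credit scoring", "hiring", "border control", "justice administration"}
LIMITED = {"chatbot", "deepfake"}

def classify_use_case(use_case: str) -> str:
    """Map an AI use case to its risk tier (simplified): the higher
    the risk, the stricter the rule."""
    if use_case in UNACCEPTABLE:
        return "unacceptable: banned"
    if use_case in HIGH:
        return "high: conformity assessment required"
    if use_case in LIMITED:
        return "limited: transparency obligations"
    return "minimal: free use"
```

Note that most everyday systems fall through to the minimal tier by default, which matches the EC's own expectation that the bulk of AI applications will remain freely usable.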
The AI risk classification of the EC (source)
While providers and users of low-risk AI systems will be encouraged to adopt (non-legally binding) codes of conduct on use, high-risk AI systems providers and users will be required to undergo extensive reviews before products or services can be leveraged. Requirements include:
- Adequate risk assessment and mitigation systems
- Activity logging to ensure traceability of results
- High-quality datasets to minimize risks and discriminatory outcomes
- High levels of robustness, accuracy, and security
- Human oversight (in system design and use) to minimize risk
- Clear and adequate information for users
- Detailed documentation for compliance checks
Providers are also specifically responsible for reporting any serious incident with a given system once it is commercialized. Failing to comply with the high-risk requirements would expose providers to fines of up to 4% of global annual revenue or 20 million euros (whichever is higher), rising to 6% of global annual revenue or 30 million euros (whichever is higher) for the banned AI practices described above.
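The penalty structure is worth working through once, since it follows the same higher-of-two-values pattern as GDPR fines: a percentage of worldwide annual turnover or a fixed amount, whichever is greater. A minimal sketch of that arithmetic (the `ai_act_fine` function name is ours, and this ignores the proposal's lower tiers for lesser infringements):

```python
def ai_act_fine(turnover_eur: float, banned_practice: bool) -> float:
    """Maximum administrative fine under the EC proposal: the higher of
    a share of global annual turnover or a fixed euro amount."""
    if banned_practice:
        # Banned practices: up to 6% of turnover or EUR 30M, whichever is higher
        return max(0.06 * turnover_eur, 30_000_000)
    # Other high-risk non-compliance: up to 4% of turnover or EUR 20M
    return max(0.04 * turnover_eur, 20_000_000)

# A company with EUR 1B turnover breaching a high-risk requirement
# faces up to max(4% of 1B, 20M) = EUR 40 million.
```

The "whichever is higher" logic means the fixed amounts act as floors for large companies, not caps, so exposure scales with revenue.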
Rules for providers of high-risk AI systems (source)
Who Will Be Impacted by the AI Regulation?
- Any user of AI systems in the EU
- Any company selling AI products or services in the EU, whether or not they are based in the EU
- Providers and users of AI systems located in a non-EU country, where the output produced by the system is used in the EU
How Will This Proposal Be Enforced?
The EC proposes a coordinated plan to ensure the implementation of the proposal once it is validated by the European Parliament and the European Council:
- All member states will put the regulation into practice by enacting their own laws. National regulators will be designated to interpret the European law within the national scope.
- The EC will oversee the coordination with member states and the high-risk system register while the European Artificial Intelligence Board (EAIB), a new supervisory authority suggested in the proposal, would oversee the enforcement (including post-market surveillance).
The regulation would first enter into force in the second half of 2022 to develop standards and governance structures before being fully implemented during the second half of 2024.
What Does This Framework Mean for Organizations Scaling AI?
Actually, a lot! With binding regulation seeming inevitable, what are the upcoming opportunities and roadblocks for organizations? Here are some insights to support your planning:
1. There are a lot of governance resources already available from national regulators. Indeed, they have been working on the potential risks of AI for the last few years and have gone through extensive research and stakeholder consultations. See, for example, the work of the French financial services regulator, ACPR, in the first blog post of this series.
2. These resources need to be articulated within the company to build robust processes. Although we have observed the establishment of ethical principles or checklists during the last few years, most organizations have not yet entirely formalized or implemented their AI governance processes. It will be key to do so before starting to work on regulatory compliance, especially since governance is an opportunity to make your development and deployment processes more efficient!
3. The processes then need to be enforced. Identifying risks for all the AI systems across the organization and consistently following the established processes for each of them will be the main challenge. Ensuring that these efforts do not slow down AI development and its successful embedding in key business processes will be another, especially given how much is still to be done in this space. Having a platform approach with tier-one auditability and governance features, such as those provided by Dataiku, will be paramount for operating in this new environment.
At Dataiku, we look at AI governance as an opportunity to build resilience by developing an operational model that supports accelerated AI growth, eliminates silos between teams, and fosters alignment and oversight transversally. Dataiku allows users to view which projects, models, and resources are being used and how. The full range of documentation capabilities (wiki, tasks, model documentation generator) enables organizations to streamline the production of the necessary auditing material. The capacity to easily organize audits and reviews of existing models, and to leverage a broad set of explainability and fairness tools, supports fully intentional and controlled AI development.
While the EU regulation could be seen by some as a blocker to AI, we are convinced that it will pave the way for sound AI development, supported by strong governance principles capable of inspiring trust in AI capabilities across industries.