Stay Ahead of the Curve for GenAI Regulation in FSI

Scaling AI | Brad LaPoff

With the passage of the AI Act, the EU is once again at the frontier of global technology regulation.

In 2024, sophisticated financial firms are no strangers to regulatory and governance considerations around models in production, especially those employing AI. The Fed, SEC, OCC, CFTC, EBA, and many other regional and national regulators create a complex, overlapping oversight regime.

While firms are careful to adhere to all relevant jurisdictional rules, they pay special attention to aligning business policies with what they see as the strictest or most far-reaching regulation in a given area. This is often straightforward for quantitative rules that govern things like capital ratios or underwriting standards. However, more qualitative rules governing business practices and the use of technology often require a more subtle understanding of a regulation's intention.

In this context, many banks and other financial firms are now digesting the new EU AI Act and trying to discern which of its elements represent such a new frontier regulatory obligation. As with the landmark GDPR, adopted in 2016, the EU is stepping boldly into uncharted technology regulation in a way that sets precedent globally. Enforcement will begin in the next 12-24 months, and potential fines can be quite large (up to 35 million euros or 7% of annual worldwide turnover), adding urgency for financial firms to prepare for compliance now.

Specific Implications for FSI Firms

The stated intention of the Act is, broadly, to protect EU citizens from harmful commercial use of AI, and it lays out several specific requirements for firms to ensure that such harms are avoided.

AI systems must be categorized into one of four risk categories (unacceptable, high, limited, or minimal risk). Organizations must then evidence that they have taken the risk mitigation steps that category requires, both in the design and in the ongoing use of the AI system. This applies not only to new models: firms will also be required to assign a risk category to all models currently in production and take the same mitigation measures. Most large FSI firms' internal model risk management teams already employ some form of risk designation tiering, but risk teams will inevitably have to spend time reclassifying models according to the Act's specific designation regime, as an internal methodology alone will not satisfy the regulation.
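To make this concrete, here is a minimal illustrative sketch of a registry record that pairs a firm's internal tier with an explicit Act classification. The class and field names are hypothetical, not part of any official taxonomy; only the four tiers themselves come from the Act, and creditworthiness assessment of natural persons is among the Act's Annex III high-risk use cases.

```python
from dataclasses import dataclass
from enum import Enum


class EUAIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g., credit scoring of natural persons
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated


@dataclass
class ModelRiskRecord:
    """Hypothetical registry entry pairing an internal MRM tier
    with an explicit EU AI Act classification."""
    model_id: str
    internal_tier: str               # firm's existing designation, e.g., "Tier 2"
    eu_ai_act_tier: EUAIActRiskTier  # the Act's designation, recorded separately
    rationale: str                   # why the model falls into this tier


# Example: an internal "Tier 2" credit model may be high risk under the Act.
record = ModelRiskRecord(
    model_id="credit-scoring-v3",
    internal_tier="Tier 2",
    eu_ai_act_tier=EUAIActRiskTier.HIGH,
    rationale="Creditworthiness assessment of natural persons (Annex III).",
)
```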


The Act also requires that firms explicitly record the “Intended Purpose” of each AI system before they start developing the model. While there is still some uncertainty around exact interpretation and possible enforcement, this requirement could represent a stricter focus on proper chronology than firms have been used to with existing regulatory standards. 

For instance, U.S. regulators regularly scrutinize firms' model risk management (MRM) practices and important models, sometimes requiring intense audits and corrective actions (even high-intensity "Matters Requiring Attention" or "Matters Requiring Immediate Attention" designations in the case of Fed inspections). However, firms are often given leniency to retroactively create or enhance documentation under such audits in order to cure issues.

Under the new AI Act, by contrast, high-risk models must demonstrate compliance before being placed on the market, signaled through a compulsory conformity assessment. Additionally, the Act creates post-market monitoring (PMM) obligations for AI models in production, requiring firms to continually validate that models still fall into their original risk category and serve their original intended purpose, or flag them for reclassification. Again, firms risk fines if they cannot evidence that appropriate documentation and risk mitigation steps were taken in the past and on a continual cadence.
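As a sketch of what a lightweight PMM check might look like, the snippet below compares a model's current use and risk tier against the intended purpose recorded before development began. All names here are hypothetical, and a real-world process would route findings into a human review workflow rather than rely on simple string comparisons.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class IntendedPurpose:
    """Hypothetical record of an AI system's intended purpose,
    captured before development begins."""
    description: str
    risk_tier: str         # classification declared at the outset
    declared_at: datetime  # evidences proper chronology


def pmm_check(purpose: IntendedPurpose,
              current_use: str,
              current_tier: str) -> list[str]:
    """Illustrative post-market monitoring check: flag drift from the
    declared purpose or tier for human review and reclassification."""
    findings = []
    if current_use != purpose.description:
        findings.append("Use has drifted from the declared intended purpose; "
                        "update documentation and reassess.")
    if current_tier != purpose.risk_tier:
        findings.append("Risk tier has changed; reclassify and re-run the "
                        "conformity assessment before continued use.")
    return findings
```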

While the EU is the first mover with its AI Act, the U.S. has begun to signal that its own increased AI regulation will follow. President Biden's Executive Order laid out broad AI standards that we expect will eventually be codified by financial regulators, mirroring how California and the broader U.S. quickly followed the EU's GDPR with similar rules of their own. Singapore, interestingly, has stood up a government-backed body, the AI Verify Foundation, which focuses on developing software that tests AI models for fairness and specific bias issues. This differs from the EU's approach, which is more agnostic as to how models are technically constructed and instead focuses on the impact to citizens.

Guidance for FSI Firms

Given the importance of proper documentation at the proper time, we recommend that firms immediately learn the Act's requirements, investigate whether existing practices align with them, and pay close attention to guidance and standards as they develop. Firms should begin documentation at the very start of any new model development whenever a model has a reasonable likelihood of eventually reaching production. Such an abundance of caution in beefing up documentation now may pay dividends later under scrutiny by EU regulators. The Dataiku team has already begun to build functionality into the platform to make it easy to adhere to the new documentation requirements, including a new EU AI Compliance solution that guides users through categorizing their AI systems according to risk and evidencing compliance with the corresponding requirements throughout development and deployment.

GenAI in particular will require extra attention, given the novel risks associated with these models and the completely different ways in which their performance is evaluated compared to traditional machine learning (ML) models. There is an opportunity to preserve much of the existing model risk framework by incorporating GenAI elements as components within traditional ML models already in production. For instance, a GenAI model can generate a novel natural language category, such as a factor tag for a stock, that is then used downstream by a traditional ML trading model as a feature input.
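As a minimal sketch of this pattern, the function below asks an LLM to assign one tag from a closed vocabulary, keeping the downstream feature space fixed and auditable. The `llm_complete` callable is a stand-in for whatever completion interface your provider exposes, and the tag names are purely illustrative.

```python
# Closed vocabulary keeps the GenAI output auditable and the downstream
# feature space fixed.
ALLOWED_TAGS = ["growth", "value", "momentum", "defensive"]


def tag_stock(news_summary: str, llm_complete) -> str:
    """Assign exactly one factor tag to a stock. `llm_complete` is a
    stand-in for an LLM provider's text-completion call."""
    prompt = (
        f"Classify the stock described below into exactly one of "
        f"{ALLOWED_TAGS}. Answer with the tag only.\n\n{news_summary}"
    )
    tag = llm_complete(prompt).strip().lower()
    # Guard against free-form output: anything outside the vocabulary
    # falls back to a neutral "unknown" category.
    return tag if tag in ALLOWED_TAGS else "unknown"
```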

In this way, even though there is now a GenAI component in the overall system, the trading model's performance can still be evaluated with the traditional metrics (e.g., RMSE, F1, or AUC) and performance bands that model risk management is already comfortable with. However, this does not remove the need, noted above, to potentially reevaluate the model's risk classification given the added complexity.
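Continuing the sketch, the downstream trading model treats the GenAI tag as one categorical feature among others, so a familiar MRM metric such as AUC applies unchanged. The column names and binary label below are illustrative assumptions, not a prescribed schema.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def evaluate_trading_model(df: pd.DataFrame) -> float:
    """Train the traditional downstream model and score it with a
    standard metric (AUC). `factor_tag` is the GenAI-produced feature;
    the other columns are illustrative."""
    X = pd.get_dummies(
        df[["price_momentum", "volatility", "factor_tag"]],
        columns=["factor_tag"],
    )
    y = df["outperformed"]  # binary label: beat the benchmark or not
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```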

Firms Shouldn't Be Overconfident That They Are Ready to Comply With the EU AI Act

We acknowledge that financial firms are already some of the most heavily regulated businesses in the world, with sophisticated regulatory and model risk management organizations. However, it would be a mistake to rely on existing processes being sufficient for this new regulatory regime. Overconfident and undermotivated firms run the risk of the same major fines that world-leading tech firms were hit with in the wake of slow compliance with GDPR. Banks and other FSI firms are well positioned to adapt to the new rules, but it will require motivated adjustment of their processes and thoughtful diligence around risk and documentation.
