3 Key Pillars to Scaling AI Safely and Successfully: Breaking Down Pillar #3

Scaling AI | David Talaga

In our previous posts in this series (here and here), we discussed the need to lay the AI and analytics foundation by creating a governance framework. The objective is simple: Define roles and responsibilities over time to organize and orchestrate analytics and AI projects according to risk, value, and regulatory criteria. In the second post, we outlined the key steps of industrializing analytics projects by involving the whole team, not just one individual, according to the rules defined in the governance framework.

But all this alone cannot solve the problem of scaling. Scaling is about volume and numbers. Now you need the “AI big picture” plus monitoring programs that let you track not just one model but all your models, initiatives, and projects. That's what this third pillar is all about: overseeing analytics and AI projects without sacrificing innovation.


Oversight for Whom? Who Exactly Needs to Oversee AI Initiatives?

The answer is straightforward: Whoever is working on the initiative needs an overview of the analytics projects.

First, this relates to the governance framework defined in Pillar #1. Governance rules, requirements, and processes must be embedded throughout the deployment of analytics projects, and the people who defined that governance must oversee the integrity of the framework along the way.

Second, this also concerns the business owners. Business owners are the analytics project's internal clients and the first to be impacted if the project fails. They need to know when the project is in production, whether it is delivering results, and whether it needs to be retrained for lack of performance.

Third, this should include all the people directly involved in the project, i.e., data engineers, data scientists, and ML engineers. ITOps teams are not left out either, since they must be able to monitor the performance of AI initiatives.

Make Model Oversight Accessible to All

To do this, it is important to make the model lifecycle clearer and easier for everyone to understand, and that task is not an easy one. AI project development often suffers from a lack of transparency: model operations are neither visible nor explainable to users such as business owners or governance committee members. Whether you are a project stakeholder, a decision maker, or a supervisor, projects must remain transparent and models must stay understandable.

Oversight for What? Defining the Purpose Behind Model Oversight

Making Oversight Flexible to Control What Matters

In our first two pillars, we assumed that all initiatives would be governed and follow a workflow based on the AI Governance framework. But does that mean the organization has to sign off on and control every single AI initiative? Or can we accept lighter governance for less impactful AI projects? Even though the company should keep full visibility into all AI projects, that does not mean controlling everything. Otherwise, the risk is killing innovation by stifling the team's creativity.

At Dataiku, we believe that the best approach is adaptive governance. Adaptive governance is the ability to select which of your AI initiatives need governance, according to rules defined in your AI Governance framework. 

This is a key criterion when it comes to scaling AI with governance.

The EU Regulation as a Use Case for Adaptive Governance

In Pillar #1, we talked about the regulatory projects underway in some countries and organizations. The EU example calls for adaptive governance by categorizing AI initiatives into four distinct risk categories:

  • Minimal risk AI initiatives have zero or negligible potential for harm. For an organization, these might be R&D projects that will never go to production or make decisions that could impact people's lives. An enterprise could then decide not to govern them. That does not mean they fly under the radar: they are known AI initiatives with no specific governance. The ability to decide not to govern them supports teams' agility and autonomy.
  • Limited risk AI initiatives have some potential for harm. In this case, governance is needed but might consist only of audit procedures to verify that everything is done with full transparency.
  • High risk AI initiatives have serious potential for harm. In that case, the governance framework is expected to include approval procedures before the initiative is sent to production or allowed to make decisions that could notably impact people's lives.
  • Prohibited AI initiatives are specifically identified use cases that should not be deployed.


An organization using AI in Europe will likely have AI initiatives that fall into each of these categories. Since the level of risk is not the same, the related oversight requirements will be different and graduated. In other words, such an organization will practice adaptive governance: it won't apply the same governance rules to each category, as the sketch below illustrates.
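As an illustration only, here is a minimal Python sketch of how such tiered rules might be encoded. All names, tiers, and requirement fields are hypothetical assumptions for this post, not a Dataiku API or the EU's actual legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories mirroring the EU's four-tier approach."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Hypothetical mapping from risk tier to oversight requirements:
# every initiative stays registered (visible), but only riskier
# tiers add audit trails and sign-off gates.
GOVERNANCE_RULES = {
    RiskTier.MINIMAL:    {"registered": True, "audit_trail": False, "sign_off": False},
    RiskTier.LIMITED:    {"registered": True, "audit_trail": True,  "sign_off": False},
    RiskTier.HIGH:       {"registered": True, "audit_trail": True,  "sign_off": True},
    RiskTier.PROHIBITED: None,  # must not be deployed at all
}

def deployment_requirements(tier: RiskTier) -> dict:
    """Return the oversight steps an initiative must clear before production."""
    rules = GOVERNANCE_RULES[tier]
    if rules is None:
        raise ValueError(f"{tier.value} initiatives cannot be deployed")
    return rules

print(deployment_requirements(RiskTier.HIGH))
# {'registered': True, 'audit_trail': True, 'sign_off': True}
```

The design point is that minimal risk projects incur no extra process yet still appear in the registry: visibility without control.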

"Always On" Automated Monitoring to Keep a Constant Eye on Selected Models' Health

At this point, the governance frameworks are defined, the workflows to build and deploy the AI initiatives are running, and the governed AI initiatives are chosen and known. But nothing is static, and your AI models are in constant motion: Models may drift, data may become obsolete, and an AI initiative may not meet the expected business impact over time. AI monitoring must be specific to each project stakeholder; there is no one-size-fits-all because:

  • Business owners focus on risk and business impact indicators.
  • Data scientists will look at performance and model health criteria.
  • IT and operations teams will monitor the allocation of IT resources related to model operations.
  • Compliance and auditing staff will check model integrity against business rules.

Once persona-specific KPIs are in place, the monitoring should be “always on.” This means each profile should receive recurring reports, or alerts in case of incidents, based on their metrics. Automating alerts makes teams more efficient and wastes less of their time on manual model supervision. A sketch of what such persona-specific alerting could look like follows below.
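To make the idea concrete, here is a minimal Python sketch of one automated monitoring cycle. The personas, metric names, and thresholds are illustrative assumptions; a real deployment would pull metrics from a model monitoring service rather than the random stand-ins used here:

```python
import random  # stands in for real metric collection

# Hypothetical persona -> metric mapping; names are illustrative only.
PERSONA_METRICS = {
    "business_owner": {"weekly_roi": lambda: random.uniform(0.8, 1.2)},
    "data_scientist": {"prediction_drift": lambda: random.uniform(0.0, 0.3)},
    "it_ops": {"cpu_utilization": lambda: random.uniform(0.2, 1.0)},
}

# Threshold and direction beyond which an incident alert is raised.
THRESHOLDS = {
    "weekly_roi": (0.9, "below"),
    "prediction_drift": (0.15, "above"),
    "cpu_utilization": (0.9, "above"),
}

def run_monitoring_cycle() -> list[str]:
    """One 'always on' pass: evaluate each persona's metrics and collect alerts."""
    alerts = []
    for persona, metrics in PERSONA_METRICS.items():
        for name, collect in metrics.items():
            value = collect()
            limit, direction = THRESHOLDS[name]
            breached = value > limit if direction == "above" else value < limit
            if breached:
                alerts.append(
                    f"[ALERT -> {persona}] {name}={value:.2f} "
                    f"is {direction} threshold {limit}"
                )
    return alerts

# Scheduled (e.g., hourly) by a job runner; each persona sees only their alerts.
for alert in run_monitoring_cycle():
    print(alert)
```

In a real stack, scheduling would be handled by an orchestrator and alerts routed to email or chat, but the shape is the same: persona-scoped metrics, thresholds, and automated notifications.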

In Summary

In this blog post, we covered the need for continuous oversight for all. We identified the oversight stakeholders and their expectations, and three takeaways stand out:

  • Letting teams innovate and create value remains essential, so organizations should move away from a full-control approach and embrace adaptive governance.
  • AI is no longer a one-person job; scaling successfully also means monitoring according to profile-specific criteria.
  • Teams have no time to lose: automated alerts and notifications are essential to guarantee continuous oversight of analytics models and project health along the AI lifecycle.
