3 Key Pillars to Safely Scaling AI Successfully: Breaking Down Pillar #1

Scaling AI | Jacob Beswick

When we discuss AI scaling at Dataiku, we like to think of the opportunities that scaling affords our customers as well as the vehicles for ensuring those opportunities are realized. In short, we want to answer: ‘If scaling can be impactful, how can we help make it happen?’

One such vehicle is embedding risk management and responsible practices into operations, translating them into effective day-to-day processes, and building confidence across the organization that those processes are properly overseen and monitored. Below you will see our visual model of AI Governance, MLOps, and Oversight, which pulls these pieces together (and all of which are discussed in our webinar, “3 Key Pillars to Safely Scaling AI Successfully”).

→ Watch the Full Webinar Here

[Figure: the three pillars framework — AI Governance, MLOps, and Oversight]

In this blog post, we’ll be focusing on the first pillar: AI Governance. At Dataiku, we define AI Governance as:

An operational framework that enables the centralization, prioritization, and standardization of rules, requirements, and processes that shape how AI is designed, developed, and deployed in an organization.

What Drives AI Governance: Answering the 'Why?' of AI Governance

Part 1: Business and Values-Based Priorities: AI Value and Responsible AI

The rules, requirements, and processes within an AI Governance framework should be designed to help realize established goals. As such, they may be informed by a number of internally- and externally-driven priorities.

[Figure: drivers of an AI Governance framework — business-led priorities (left) and values-based considerations (right)]

On the left, you’ll see business-led priorities. We think of these in terms of supporting the realization of AI Value.

AI Value: The added value the business expects from deploying AI.

These might include an AI ambition or strategy set at the leadership level. Think of when a CEO puts pen to paper on investing in digitalization and leveraging data toward business objectives, whether that’s delivering existing products and services better or expanding into new ones. These kinds of ambitions are generally the material concerns of an AI strategy that aligns with a business or growth strategy, and these topics have occupied many webpages, conferences, and expo halls.

But beyond the strategic level, AI Governance sets direction on best practices that have a material impact on meeting objectives, practices that both shape and are shaped by an organizational structure in which new responsibilities and processes are created. This is where AI Governance facilitates control over AI development and use, creates means for accountability, and ultimately aligns means with ends.

On the right, you’ll see values-based considerations. We think of these in terms of Responsible AI.

Responsible AI: At Dataiku, we define Responsible AI as an approach through which an organization grows the value of its AI pipelines by checking for and proactively mitigating unintended consequences related to fairness, bias, accountability, transparency, and explainability. In this way, it introduces techniques and tooling into MLOps processes that help realize the values-based priorities established through the AI Governance framework.
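
To make this concrete, here is a minimal sketch (not Dataiku’s implementation) of one such check as it might run in an MLOps pipeline: it measures the demographic parity gap, the spread in positive-prediction rates across groups, and flags the model when that gap exceeds a tolerance. The function, toy data, and the 0.10 tolerance are illustrative assumptions; in practice the threshold would be set through the AI Governance framework.

```python
# Minimal, illustrative fairness check that could run as an MLOps pipeline
# step. The names and the tolerance are assumptions, not a Dataiku API.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = {}  # group -> (positives, total)
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (pred == 1), total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Toy usage: two groups with equal positive rates pass the check.
predictions = [1, 0, 1, 0, 1, 0, 0, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
TOLERANCE = 0.10  # assumed threshold, set through the AI Governance framework

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Check failed: escalate for human review before deployment")
```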

Many companies have established ethical or values-based positions, whether articulated in a long-standing corporate policy or in a more recent statement reflecting developments around AI principles. In addition to this internal driver, there is a complementary external one: guidance and regulation on AI that are today shaping behaviors and expectations around how AI should be developed, deployed, and monitored.

A critical counterpart to both AI Value and Responsible AI, and a principal concern of any AI Governance framework, is identifying, understanding, and mitigating risks.

AI Risk: The likelihood of some deviation from what is expected or intended across the AI lifecycle, from business initiative or challenge identification, through development, to rollout.

An AI Governance framework should be driven by awareness of, and commitment to, identifying and mitigating AI risks. These risks can adversely affect both the business-led priorities mentioned above and the values-based ones. Left unaddressed, they can disrupt operations, derail priorities, and even damage an organization’s public reputation.

Part 2: The World of AI Regulations: Externally-Set Values and Requirements

Expectations around AI risk can be set internally, to align with organizational values and priorities; they can also be set externally, by regulation and non-legislative soft law (such as codes of conduct or guidance).

Identifying, understanding, and mitigating risks is the bread and butter of AI Governance. This matters because unmitigated risks can undermine efforts to leverage AI toward strategic ambitions, with consequences ranging from withdrawn deployments, to financial penalties, to reputational damage.

Across geographies, governments are progressing regulation and non-legislated guidance explicitly concerned with governing AI, but not all geographies are advancing at the same pace. Below is an indicative guide to the maturity of new requirements and soft law by geography:

[Figure: indicative maturity of AI regulation and soft law by geography]

These externally established requirements are still evolving, but the expectation is already clear: organizations developing, selling, or using AI should be prepared to adapt to new regulatory regimes that formalize risk frameworks and push new practices around monitoring, assessment, revision, and reporting.

What’s more, most if not all of these developments share a starting point: values-based considerations around respecting rights, ensuring human-centricity, and preventing harm where AI is used. To these ends, most refer to building transparency and accountability, ensuring fairness, mitigating bias, and so forth.

While AI regulation is still evolving, that is no cause for complacency. Building and embedding an AI Governance framework now puts organizations in a strong position to adapt to these new regimes as they are formalized.

So How Is This Practical? Balancing the Social and Technological 

I haven’t yet gotten to the crux of an important consideration of AI Governance: namely, the balance between the social and technological components leveraged in following rules and processes and in meeting requirements.

Social: The social components of AI Governance include setting the necessary rules, processes, and requirements. But they also include, for example, creating roles with responsibilities, establishing critical meeting points, and so forth.


Technological: The technological components of AI Governance are the tools used to ensure AI Governance is effectively delivered. This could be a spreadsheet approach in which information fields are codified and must be filled in by teams over the AI lifecycle, but it can also be something more dynamic and accommodating (see Dataiku Govern).
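
To make the idea of codified information fields concrete, here is a minimal sketch of what a governance record for a single AI project might look like. The field names, lifecycle stages, and completeness check are illustrative assumptions, not Dataiku Govern’s data model; a spreadsheet approach would capture the same fields as columns.

```python
# Illustrative sketch of codified governance fields for one AI project.
# Field names and stages are assumptions, not Dataiku Govern's data model.

from dataclasses import dataclass, field, fields

@dataclass
class GovernanceRecord:
    project_name: str
    business_objective: str          # ties the project to the AI strategy
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    lifecycle_stage: str = "design"  # design -> development -> deployment
    risk_owner: str = ""             # a named responsibility (social component)
    sign_offs: dict = field(default_factory=dict)  # role -> approved (bool)

    def missing_fields(self):
        """Return the fields still empty, so a review can flag the gaps."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

record = GovernanceRecord(
    project_name="doctor-time-optimizer",
    business_objective="Allocate doctors' time more efficiently",
)
print("Incomplete fields:", record.missing_fields())
# -> Incomplete fields: ['identified_risks', 'mitigations', 'risk_owner', 'sign_offs']
```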


Balance: Balance refers to how an organization leverages both social and technological components to meet its AI Governance obligations. At one extreme, a social-only approach might mean minimal documentation, meetings for every review and sign-off, and a reliance on human memory and communication. At the other, a technology-only approach might mean relying on a platform into which all information is input, where parameters are codified, models are tested, and sign-offs and inspections are automated. Living at either extreme, we think, means living with limitations.
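
To show how the two components might meet in practice, here is a hypothetical sketch of a pre-deployment gate in which human sign-offs (the social component) and automated checks (the technological component) must both pass before rollout. The roles, check names, and gating policy are illustrative assumptions, not a prescribed setup.

```python
# Hypothetical pre-deployment gate combining social sign-offs with
# automated checks. Roles, check names, and the policy are illustrative.

REQUIRED_SIGN_OFFS = {"risk_owner", "business_owner", "data_science_lead"}

def ready_to_deploy(sign_offs, automated_checks):
    """Allow deployment only when every required role has signed off
    and every automated check has passed."""
    approved = {role for role, ok in sign_offs.items() if ok}
    missing = REQUIRED_SIGN_OFFS - approved
    failing = [name for name, passed in automated_checks.items() if not passed]
    if missing or failing:
        print("Blocked. Missing sign-offs:", sorted(missing),
              "| Failing checks:", failing)
        return False
    return True

# Toy usage: one sign-off is missing and one automated check fails.
ready_to_deploy(
    sign_offs={"risk_owner": True, "business_owner": True},
    automated_checks={"fairness_gap": True, "performance": True,
                      "regulatory_review": False},
)
```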

One option is to undertake AI Governance through frequent meetings and extensive logs in spreadsheets, but we’ve found that this can be inefficient, create bottlenecks, and hinder scaling. Instead, at Dataiku we have envisioned a better way to implement AI Governance, one that builds the right tooling to support the social components of good AI Governance. I’ll walk through a fictional case study that highlights the roles, the value, and the balance between the social and technological components of an AI Governance framework.

A (Fictional) Case Study:

Here we have a firm that provides optimization solutions for hospitals’ back-office operations. 

[Figure: hospital back-office operations]

Aspiration: The firm’s CEO has made AI a cornerstone of the company’s digital strategy. This ambition is behind moving the company’s product from CRM tools to prediction and optimization solutions focused on enabling hospitals of all sizes and locations to allocate doctors’ time more efficiently, which has both productivity and direct financial impacts.

State of Play: The C-suite hired the right staff, who produce and deploy technically high-performing models. But there is no AI Governance framework in place. Aside from growing sales, the firm hasn’t clarified what matters, such as fair and nondiscriminatory outcomes for its customers’ patients. Without this clarity, the firm prioritizes efficient operations without accounting for risks, how to deal with them, or their potential blowback.

Risks: Without an AI Governance framework, the risks of these technically high-performing models, including unanticipated impacts, are never considered. And even though the company operates in a highly regulated industry, the regulatory requirements associated with AI are not investigated. Left unmitigated, these risks have the potential to introduce financial or operational challenges to the business.

Solution: What’s needed is an AI Governance framework that supports the organizational strategy and considers the risks of the firm’s offering across the AI lifecycle, with Responsible AI techniques and tooling integrated across MLOps. The outcome? Coherence across analytical teams, reassurance to executives that AI is being developed to expectation, and growing customer confidence that using the firm’s services will not introduce new risks to their own.

The Key Takeaways

  • AI Governance frameworks can support organizational priorities and values. Good AI Governance starts at a business initiative level and extends through deployment.
  • Tooling isn’t everything: Setting up the right rules, processes, and requirements is critical.
  • New requirements from governments are evolving, and they share common aspirations. Waiting for these requirements to be finalized means losing out on the positive impacts AI Governance can have for your organization; taking action now puts you in a strong position to meet compliance requirements.
