We have reached a point in AI adoption where organizations are no longer simply talking about integrating AI into business processes; they are using AI as a key tool for making business decisions. Models now come face-to-face with strategic stakeholders day in and day out, and the impact they can have puts greater pressure on them to perform as expected. As the level of AI involvement and responsibility grows, so do the complexity and importance of proper control mechanisms.
For this reason, AI Governance is a concept that Dataiku puts at the forefront of product and solutions development. In a recent Dataiku Product Days session, Dataiku Governance specialists Paul-Marie Carfantan and Ben Montgomery introduced the topic and discussed how Dataiku Governance solutions help organizations achieve agility and secure value as they scale their AI initiatives. This blog is a high-level recap of their conversation.
It’s All About Balance
Some organizations focus on a seemingly positive feedback loop at the end of the model lifecycle: models appear to perform well, promised business impact seems guaranteed, and time-to-market shrinks. However, when critical review and evaluation stages across the AI lifecycle are sacrificed, unintended and negative consequences follow as risks materialize, with impacts on reputation, operations, and regulatory compliance. With limited oversight over the health of a model in production, errors can slip in and degrade prediction quality. Organizations must strike a balance between agility and control in order to create an impact that is sustainable for AI efforts at scale and doesn't amplify the inherent risks of AI adoption. Real, reliable value generation and AI success are inherently tied to solid AI Governance.
AI Governance at Every Stage
Dataiku’s approach to AI Governance, underpinned by powerful tooling, enables organizations to introduce oversight and control at every stage of the model lifecycle without sacrificing autonomy. The balancing act begins at the ideation/exploration stage, but the pre-production/sign-off stage is just as significant. Before models reach production, potentially with automation, you need the right input from the right stakeholders and thorough review and sign-off processes for those models. Then, through sign-off and into operation, a loop of continuous monitoring should be in place to ensure that models keep performing as intended and that any drift is identified and mitigated. At each stage, it is worth pursuing even greater visibility into model processes.
Scaling at Speed While Balancing Control & Autonomy
Here are some key components of AI Governance to consider as you scale safely:
- Centralize and Prioritize
  - Visualize and track AI initiatives in a centralized place
  - Prioritize according to business needs and feasibility, analyzing initiatives on a risk-adjusted grid at an early stage to identify where investment matters most
- Explain and Qualify
  - Qualify each project for risk, value, and feasibility
  - Assign a business sponsor and sign-off to ensure ownership and understandability
  - Continuously assess impact
- Deploy and Monitor
  - Define and continuously monitor key metrics in a programmatic manner with an open feedback loop
  - Protect the production environment to prevent unapproved deployments
  - Continuously test and refine
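The "Deploy and Monitor" idea above can be sketched in a few lines of code. This is a minimal illustration, not Dataiku's implementation: the function names, the accuracy-based health check, and the 0.05 drift threshold are all hypothetical choices made for the example.

```python
def check_model_health(baseline_accuracy, current_accuracy, max_drop=0.05):
    """Flag a model for review when its accuracy drifts below an allowed drop.

    A real monitoring loop would track several metrics (data drift,
    prediction drift, latency); accuracy alone keeps the sketch simple.
    """
    drift = baseline_accuracy - current_accuracy
    return {"drift": round(drift, 4), "needs_review": drift > max_drop}


def gate_deployment(approved, health):
    """Protect the production environment: block any deployment that lacks
    sign-off approval or whose monitoring checks flagged it for review."""
    return approved and not health["needs_review"]


# Example: a model whose accuracy fell from 0.92 to 0.85 exceeds the
# allowed drop, so even an approved deployment is blocked.
health = check_model_health(baseline_accuracy=0.92, current_accuracy=0.85)
print(health)
print(gate_deployment(approved=True, health=health))
```

Running the check on a schedule and feeding the result back to the governance team closes the "open feedback loop" the checklist calls for.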
Keep Control With Dataiku
The three key components outlined above are based on years of Dataiku experience and observation of what has helped organizations reach the next level and scale safely. This picture of governance that accelerates innovation rather than blocking it is part of the universal story of systematic Everyday AI that Dataiku champions.
“Governance is not a new topic at Dataiku. We have been working for many years on setting the tone around governance to address our clients’ needs by building key governance and explainability elements into the DNA of our core stack.” - Paul-Marie Carfantan
Moving Towards Seamless Governance Solutions
Expectations and requirements that shape how organizations build and deploy AI can be driven by industry, geography, business or team values, and so on. To ensure Dataiku users can meet these expectations and requirements while reducing time-to-value, we’ve created Govern, which facilitates centralization and prioritization, explanation and qualification, and controls over deployment and monitoring without forfeiting team autonomy. In addition to our core offering through Govern, we’re working with and supporting users on meeting niche and demanding requirements.
For example, one of the ways we’re doing this is by evaluating and building solutions that address diverse requirements across the following:
- AI Pipeline Controls Enforcement
  - Enforce consistent controls and processes on AI pipeline management through customized guardrails
- AI Regulation Compliance
  - Support preparation for impending regulatory frameworks
- Regulated Industries Compliance
  - Accelerate model validation and reduce AI time-to-market while guaranteeing strict compliance with existing industry requirements
- Responsible AI
  - Embed a Responsible AI approach through the systematic implementation of checks and mitigation approaches concerned with bias, fairness, transparency, explainability, and accountability
- Positive Impact AI
  - Align AI initiatives and governance practices with corporate sustainability goals and UN SDG frameworks
Putting It All Together
As AI reaches general adoption and organizations find themselves increasingly reliant on trustworthy models for good decision-making, questions of governance (as well as control and explainability) become even more important. For this reason, Dataiku has product features that enable teams to address these concepts in practice. That said, it is critical to ensure that your AI Governance strategy is implemented consistently across all levels and teams of your organization. For governance initiatives to succeed, there must be a mentality and a structure around roles that support collaboration, and teams’ goals and objectives need to align. Analytics and AI development teams can focus on rapid innovation, but they must also work with governance teams to make sure that risk is mitigated as model processes evolve quickly. This collaboration, with a coherent protocol of checks and balances in action, is what will ultimately allow an organization to scale AI confidently and efficiently.