As organizations strive to scale their artificial intelligence (AI) initiatives, they also face the challenge of maintaining oversight. This might mean ensuring compliance with business requirements, but also with a growing body of industry and AI regulations. At Everyday AI New York Tech Day 2023, Dataiku’s Kirsten Hoogenakker discussed achieving successful AI Governance. Watch the full session from Everyday AI NYC on AI Governance, or read on for the highlights.
Roadblocks to Successful AI Governance
Every company wants to be able to scale AI systems and AI models successfully. But it's not just about AI technology and applications built on machine learning algorithms. Even democratizing data, which means breaking down silos and providing self-service frameworks, requires good governance. Ultimately, the goal is generating value from massive amounts of data.
But today, it’s not just about creating value — it’s also about reducing cost and managing risk. These are huge tasks to accomplish and extremely difficult goals to scale in the modern organization because:
- Organizations lack standard processes. Every line of business might have its own unique working process for getting data projects into production.
- There tends to be low traceability. Let’s face it — no one really loves documenting their work. However, if you don't have the documentation for data pipelines, you don't have the audit trails that you need to be successful.
- IT organizations lack oversight. You probably have different lines of business and teams working on data or AI projects. Without a central watchtower to monitor projects, you don't know what's coming up or what’s already in production.
Dataiku can provide a way through some of these challenges with effective AI Governance. From governance processes to monitoring project progress, keeping stakeholders in the loop and identifying potential risks, Dataiku is the key to safely and efficiently delivering AI at scale.
Get a full demo of Dataiku’s AI Governance feature in the full Everyday AI session:
What Is AI Governance, Anyway?
But we’re getting ahead of ourselves, so let’s take a step back. People often use the terms Responsible AI and AI Governance interchangeably. So what exactly is AI Governance, anyway?
At Dataiku, we believe an analytics and AI Governance framework enforces organizational priorities through standardized rules, processes, and requirements. These priorities then determine the design, development, and deployment of analytics and AI. That means AI Governance, MLOps, and Responsible AI, while intrinsically linked, are distinct.
Get Started With AI Governance
“Responsible AI is the starting point to being able to build out what's important to your organization. If you don't have a grasp on what's important or why you are doing it, it's really hard to create a framework or a workflow that allows you to accomplish your goals.”
— Kirsten Hoogenakker, Solutions Engineer at Dataiku
- Do you have a way to look for bias in your model?
- What are your organization's ethics principles?
- Do you have a clear process for defining who needs to approve which portions of your pipeline?
- Are you able to monitor those models once they're in production?
- And what sorts of bounds are going to be important from an organizational standpoint?
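The bias question above can be made concrete with a simple metric. A minimal sketch of one common check — the demographic parity gap, i.e., the difference in positive-prediction rates between two groups — using illustrative toy data (the function name, groups, and predictions here are assumptions for demonstration, not Dataiku functionality):

```python
# Demographic parity difference: gap in positive-prediction rates
# between two groups. Toy data for illustration only.
def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Return P(pred=1 | group_a) - P(pred=1 | group_b)."""
    def positive_rate(g):
        in_group = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(in_group) / len(in_group)
    return positive_rate(group_a) - positive_rate(group_b)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # model's binary decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # sensitive attribute
gap = demographic_parity_difference(preds, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the model approves both groups at similar rates; your organization's ethics principles would determine what threshold is acceptable.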
No two organizations will answer these questions in the same way, which is why no two AI Governance frameworks will be exactly alike. But these questions are a good place to start building your own AI Governance framework.
From there, these questions feed into the MLOps piece. Those frameworks and processes cover the day-to-day: in other words, how you're actually automating these models, or where you're collecting metrics like data drift.
All this, in turn, rolls up into a governance framework. This governance framework allows the organization to work through compliance risks, audit logging, governance issues, and so on — in other words, to build what's important based on what's happening on the ground.
AI Governance & Generative AI
The future of AI Governance is around Generative AI and large language models (LLMs). What we’re seeing at Dataiku is Generative AI pushing organizations to think about governance in an expedited way. More than ever before, organizations want to make sure they’re building trust in AI. And trust comes through transparency, documentation, and ensuring that AI policies keep up with this constantly evolving landscape.
Of course, it's important to govern applications built on top of LLMs in the classic sense: making sure the right approvers are in place, that there is proper oversight of the pipeline, and so on. But even more important for Generative AI is having a human in the loop.
When it comes to putting a human in the loop for Generative AI and LLM applications, you need to decide what makes sense. For something like a credit card fraud project that is high risk, it’s probably critical. For other projects, maybe less so.
The bottom line is, with Dataiku as your partner, you're set up with the right tools and strategies — Generative AI or not — to navigate this complex landscape successfully. With Dataiku, balancing oversight with agility turns from a challenge into an opportunity.