Walk the Talk: Moving From Principles to Action to Implement AI Governance at Scale

Scaling AI | Jacob Beswick, Triveni Gandhi

At the global level, a lot of work is underway to pivot from years of principles-based discussions around AI Governance and Responsible AI toward practical approaches. Actionability is critical if principles-based discussions are to have a material impact on organizations, their employees, and end users. So, in this blog, we’ll cover at a high level what we see as the first part of moving from principles to practice: translating principles into actionable criteria.


We’ve written about principles-based frameworks and the trend toward practice in the past. From our perspective, key leaders in the space include the USA’s NIST AI Risk Management Framework (AI RMF), Singapore’s AI Verify, and Germany’s VDE SPEC. Each lays out a theoretical landscape of values and populates it with practical approaches for meeting higher-order goals. Others, like the European Union’s proposed AI Act and the USA’s Blueprint for an AI Bill of Rights, sit a little outside this group insofar as their articulation of ‘what to do’ and ‘how to do it’ is more limited.

Recognizing this harmonization of ambitions across frameworks, NIST has even put together a limited comparison in the AI RMF:

[Figure: comparison of AI governance frameworks. Source: NIST AI RMF v2]

But if we take a step back and look at all these frameworks, they are focused on the same thing: principles that set out an ambition for the ethical development of AI, and ideas about the practices needed to ensure that ambition is realized. This is something we’ve prioritized at Dataiku, starting with our unified approach to AI Governance, Responsible AI, and MLOps. Navigating the relationships between these domains has meant introducing high-level ethical concepts to our stakeholders, all while grounding discussions in terms of what comes next. In practice, this means working toward a well-articulated set of rules, processes, and requirements implemented through effectively managed and monitored MLOps.

How We Define AI Governance, Responsible AI, and MLOps:

An AI Governance framework enforces organizational priorities through standardized rules, processes, and requirements that shape how AI is designed, developed, and deployed.

Responsible AI focuses on aligning AI with an organization’s values by proactively checking for and mitigating concerns around reliability, accountability, fairness, and transparency.

MLOps ensures that the established processes and frameworks are made operational through the entire AI lifecycle, including monitoring and continuous improvement.

Now, if moving from principles to practices sounds easier said than done, that’s because it is. It is not impossible, however, and we are seeing more and more clients take concrete steps from frameworks to actionable programs and, ultimately, to safely scaling AI, even without having created a perfect system from the get-go. In fact, with the number of frameworks available today, many of our clients find it simpler to take one from a major player (NIST, EU, VDE, RAII, etc.) and adapt it to their needs, which allows them to make progress on their governance goals while continuing to iterate.

Regardless of the framework you choose to implement, there are key steps needed to build a practice for responsible and governed AI at scale, starting with aligning corporate values with expectations for AI and analytics, determining measurable outcomes, and implementing new ways of working. Below is an overview of the process we’ve developed for our clients, which takes inspiration from existing standards.

Choose, Iterate on, or Build Your Value Framework:

  1. Use your organization's existing statement of corporate values or principles to frame the use of AI and analytics.
  2. Reflect on what outcomes you want to ensure for your staff and end users where AI and analytics are used.
  3. Depending on your maturity, location, and regulated status, determine if these values for AI are sufficient to cover compliance needs or if those requirements should be folded in. 
  4. Leverage an external framework as-is, or refine it according to your organization’s priorities and compliance requirements. (One way to encode the result is sketched below.)
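To make the output of these steps concrete, here is a minimal sketch, in Python with hypothetical names, of how a value framework could be encoded as data so that principles and their criteria stay explicit and auditable. This is one possible representation, not something prescribed by NIST, the EU, or any other framework.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """A concrete, checkable expectation derived from a principle."""
    name: str
    description: str
    indicators: list[str] = field(default_factory=list)  # filled in at step 6 below

@dataclass
class Principle:
    """A high-level value, whether internal or adapted from an external framework."""
    name: str
    source: str  # provenance: corporate values, NIST AI RMF, etc.
    criteria: list[Criterion] = field(default_factory=list)

# Hypothetical example: a fairness principle adapted from an external framework
fairness = Principle(
    name="Fairness",
    source="Corporate AI values, informed by NIST AI RMF",
    criteria=[
        Criterion(
            name="Assess biases during model development",
            description="Every model project documents a bias assessment "
                        "before promotion to production.",
        )
    ],
)
```

Encoding the framework this way makes the later steps easier: indicators and lifecycle checks can be attached to each criterion rather than living in a slide deck.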

Make Principles Actionable:

5. Identify the relevant colleagues to translate the principles you’ve selected into actionable criteria. The right colleagues will be context-specific but could include, among others, stakeholders from core business, data science, customer-facing, ethics or responsible practice (if they exist), and compliance teams.

Say a principle is Fairness. Consider what criteria (there can be more than one!) would show that this principle is being meaningfully realized. This could include something like “Assess biases during model development.”

6. Work with your stakeholders to agree on measurable indicators that are the means of delivering on the criteria. You should choose multiple indicators that represent different parts of the AI/ML lifecycle, from design to deployment and monitoring.

The criterion above, assessing biases during model development, could be actioned in a number of ways. Working with the right stakeholders to identify best practices for your organization helps systematize the implementation of the value and its criteria. For instance, measurable indicators for this criterion could include the identification and assessment of sensitive data attributes.
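To illustrate what those indicators could look like in practice, here is a minimal Python sketch assuming a pandas DataFrame of scored records with a binary prediction column. The attribute list and the simple selection-rate gap below are hypothetical stand-ins; which attributes count as sensitive, and which fairness metrics matter, are context-specific decisions for your stakeholders.

```python
import pandas as pd

# Hypothetical list; real sensitive attributes depend on your context and jurisdiction
SENSITIVE_ATTRIBUTES = ["gender", "age_band", "postcode"]

def find_sensitive_attributes(df: pd.DataFrame) -> list[str]:
    """Indicator 1: flag which known sensitive attributes appear in the data."""
    return [col for col in SENSITIVE_ATTRIBUTES if col in df.columns]

def selection_rate_gap(df: pd.DataFrame, attribute: str, prediction: str) -> float:
    """Indicator 2: the largest gap in positive-prediction rates across groups,
    a simple demographic-parity-style check on a binary (0/1) prediction column."""
    rates = df.groupby(attribute)[prediction].mean()
    return float(rates.max() - rates.min())

# Usage sketch against a scored validation set:
# for attr in find_sensitive_attributes(scored_df):
#     gap = selection_rate_gap(scored_df, attr, prediction="model_decision")
#     print(f"{attr}: selection rate gap = {gap:.2%}")
```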

7. Map indicators to processes and steps in the AI development cycle (from ideation to deployment).
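A lightweight way to represent this mapping is a simple lookup from lifecycle stages to the indicators agreed in step 6. The stage names and indicators below are hypothetical examples, not a fixed taxonomy.

```python
# Hypothetical mapping of measurable indicators to AI lifecycle stages
LIFECYCLE_INDICATORS: dict[str, list[str]] = {
    "ideation":    ["Sensitive attributes identified in candidate data"],
    "development": ["Bias assessment documented", "Selection rate gaps computed"],
    "deployment":  ["Sign-off from governance stakeholders recorded"],
    "monitoring":  ["Fairness metrics tracked against agreed thresholds"],
}

def required_indicators(stage: str) -> list[str]:
    """Return the indicators a project must satisfy before leaving a stage."""
    return LIFECYCLE_INDICATORS.get(stage, [])
```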

Implement New Practices:

8. Develop requirements and checklists that encourage critical analysis and documentation of systems.

9. Choose a use case to test this framework on, and learn from what does and does not work. 
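One way steps 8 and 9 could come together is a simple checklist structure that encourages documentation (each item records its evidence) and can be run against a single pilot use case to surface what is outstanding. The use case and items below are hypothetical; this is a sketch, not a finished governance tool.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    requirement: str    # e.g., an indicator from step 6
    evidence: str = ""  # link or note documenting how the requirement was met
    done: bool = False

@dataclass
class GovernanceChecklist:
    use_case: str
    items: list[ChecklistItem] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """Requirements not yet met: the learning material for your next iteration."""
        return [item.requirement for item in self.items if not item.done]

# Step 9: pilot the framework on one use case and learn from the gaps
pilot = GovernanceChecklist(
    use_case="Churn prediction (pilot)",
    items=[
        ChecklistItem("Sensitive attributes identified"),
        ChecklistItem("Bias assessment documented"),
        ChecklistItem("Monitoring plan agreed"),
    ],
)
print(pilot.outstanding())  # everything is outstanding at kickoff, and that is fine
```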

It should be clear by now that the key variable of interest in this blog is practice. Building out ethical frameworks, populated by values and principles, is the right start. But the biggest issue we see with clients is a fear of starting: the belief that until the most perfect and comprehensive framework is built, we cannot even begin testing a protocol. This creates a lot of avoidable lag time.

Instead of expecting to build the most perfect RAI framework at the start, why not start with the basics and improve from there? Be honest and transparent about the experimental nature of what you’re building from the ground up. Reflect, learn, and then do it better the next time around. When all is said and done, there will never be a perfect RAI framework or implementation, but there can be one that is more proactive than reactive and that allows you to grow and iterate each time.
