As the everyday use of AI grows across many industries, organizations are experiencing a shift in the culture around data-driven decision making. However, the use of AI, like any other technology, comes with a certain amount of risk. Effectively implemented oversight, management, and clear, value-driven organizational priorities are therefore crucial for safe scaling. While the concept of model risk management and a culture of governance is well established in the financial services sector, other industries such as retail and pharmaceuticals can also benefit from shifting their processes and culture around AI Governance.
Managing AI risk can sound overwhelming, especially considering the unintended consequences that can crop up across the AI lifecycle. Luckily, there is a growing number of resources to support teams as they build their AI Governance and Responsible AI processes. In their forthcoming book from O’Reilly, “Machine Learning for High-Risk Applications: Techniques for Responsible AI,” Patrick Hall, James Curtis, and Parul Pandey provide an overview of both tooling- and process-based approaches to Responsible AI in practice. These include understanding compliance frameworks, cybersecurity, and novel ML methods to address bias in the pipeline.
In particular, they highlight the importance of oversight processes both in the design and implementation of AI systems. Such processes include review criteria for each part of the product lifecycle that can ensure AI is aligned with an organization’s value framework. The authors rightly point out that such review processes are in contrast to the ‘move fast and break things’ mentality that has permeated AI development. This mindset is a holdover from the early days of machine learning (ML) when researchers and large institutions were pushing the frontiers of what could be done.
Today, however, more organizations are moving towards Everyday AI whereby a number of high-value but repetitive tasks can be automated. With the permeation of AI across so many industries, many companies are developing internal review boards as well as guidelines for the responsible development of new AI products. At a macro-level, this is a notable shift in the culture of AI, pivoting from an approach that prioritized models’ rapid operationalization to one where safety considerations and management are embedded in scaling.
Given this shift, it is necessary to understand what resources — for technical and non-technical practitioners alike — can make the critical difference in the way organizations safely and responsibly scale AI.
Tooling Makes the Difference
Imagine an organization where data engineers work on SQL pipelines, data analysts use standalone BI tools, data scientists work within individual notebooks, and Ops teams deploy models through custom scripts. Any review committee would struggle with tracing and monitoring AI across these multiple tools for a single product — let alone hundreds of models!
To scale AI Governance safely and efficiently, organizations need a centralized tool for both developing and monitoring AI. For example, with Dataiku and the newly released Govern node, a review committee can easily set checks and permissions on the deployment of models and projects. Monitoring is seamlessly integrated with the Dataiku design node, streamlining oversight of ML development from data pipelines to deployment. Additionally, promotion criteria allow Responsible AI checklists to be integrated directly into data practitioners’ existing workflows.
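To make the idea of promotion criteria concrete, here is a minimal sketch of how a Responsible AI checklist can be expressed as programmatic checks gating a model's promotion to deployment. This is a generic illustration, not Dataiku's API: the check names, thresholds, and the `ModelCandidate` structure are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelCandidate:
    """Hypothetical record of a model awaiting promotion review."""
    name: str
    auc: float                    # validation performance
    bias_report_attached: bool    # fairness analysis completed and filed
    sign_off_by: Optional[str]    # reviewer who approved the model, if any

def promotion_checks(model: ModelCandidate, min_auc: float = 0.75):
    """Return a list of (passed, description) pairs for each review criterion.

    The specific criteria here (AUC floor, bias report, reviewer sign-off)
    are illustrative assumptions; a real review board would define its own.
    """
    return [
        (model.auc >= min_auc, f"AUC {model.auc:.2f} meets floor of {min_auc}"),
        (model.bias_report_attached, "bias/fairness report attached"),
        (model.sign_off_by is not None, "reviewer sign-off recorded"),
    ]

def can_promote(model: ModelCandidate) -> bool:
    # A model moves to deployment only when every check passes.
    return all(passed for passed, _ in promotion_checks(model))

candidate = ModelCandidate(
    name="churn-v3", auc=0.81,
    bias_report_attached=True, sign_off_by="review-board",
)
print(can_promote(candidate))  # True
```

Encoding the checklist as code means a failed criterion blocks promotion automatically rather than relying on a reviewer remembering to check it, which is exactly the kind of oversight-by-default that governance tooling aims for.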
Setting AI principles and review criteria is a crucial step on the path to AI Governance, but so is implementing and integrating them with an organization’s existing processes. Clear organizational guidelines for the AI lifecycle are an important start, but as Hall et al. make evident, executing on these principles requires both a cultural shift and the right tooling for all stakeholders.