Industry Best Practices for AI Governance

Dataiku Product | Joy Looney

What we know for sure is that companies today face enormous pressure to adopt AI technology quickly. Vikram Mahidhar and Thomas H. Davenport of Harvard Business Review even go so far as to say that "companies that wait to adopt AI may never catch up." At the same time, the perceived risks of this hastened race tempt some organizations to avoid AI adoption altogether, a choice that is ultimately to the detriment of both consumers and the organizations themselves.

Accordingly, model risk management for machine learning (ML) is a very popular topic in the AI community right now, so Dataiku was excited to sit down with Patrick Hall, bnh.ai Principal Scientist and an author of O'Reilly's Machine Learning for High-Risk Applications ebook, in a recent Product Days session to chat about AI Governance challenges and explore his perspective on industry best practices.

→ Find the Full Presentation Here

What’s the Situation? 

The reality is that there are thousands of publicly reported AI incidents, with degrees of consequence ranging all the way to disastrous. While you might assume that only careless companies desperate for recognition fall victim to the risks of ML and models, that assumption is incorrect. AI risks do not discriminate by company type, age, or size: even the most mature, experienced companies suffer the ramifications of unchecked risk. In fact, many large organizations have experienced repeat AI incidents under the scrutiny of the public eye.

Why Is This Happening and What Do We Do? 

The answer is not black and white, but some key observations help us understand why this is happening. When ML and models use data reflective of the real world, those models may, if unchecked, risk repeating and perpetuating harmful biases or discrimination captured by that data. On numerous occasions, these harms (bias, unfairness, or discrimination) have had a concentrated impact on marginalized groups (e.g., bias and opacity in credit scoring, employment screening systems, and healthcare resource allocation), and it is when organizations ignore or avoid these risks, instead of anticipating and controlling them, that the risks become fundamentally worse. Turning a blind eye does not make the problem disappear.

As an example of how organizations are handling this phenomenon, we can look to banks where model risk management is deployed as a risk mitigation practice. 

→ More On Model Risk Management in Banking

Understanding Model Risk Management 

Let’s go further with Patrick Hall’s outlook on model risk management. As we highlighted above, model risk cannot be eliminated, as the risks are intrinsic to the process, so the focus must shift to controlling, managing, and mitigating the risks rather than avoiding them. This shift rests on several key components of an effective risk management strategy:

  • Sound development, implementation, and use of models
  • Rigorous and thorough model validation (adversarial testing)
  • Governance and control mechanisms (board and senior management oversight, policies and procedures, controls and compliance, appropriate incentive and organizational structure)

As a starting point, organizations should turn to the measurement of materiality. Materiality is the probability that an AI incident will occur multiplied by the cost of that incident should it occur, and this measurement is a key determining factor for risk tiering.  

Risk tiering is the next necessary step for risk control. Because organizations don’t have inexhaustible resources, they should direct their available resources to the highest-risk areas. Additionally, organizations should turn to the guiding principle of model risk — effective challenge. Effective challenge depends on a combination of incentives, competence, and influence. This means that models must be critically analyzed by informed parties who are capable of identifying model limitations and assumptions and then driving the appropriate changes.
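The materiality measurement and the tiering step above can be sketched in a few lines of Python. This is an illustrative sketch only: the tier thresholds, probability, and cost figures below are assumptions for demonstration, not values from the presentation.

```python
from dataclasses import dataclass


@dataclass
class ModelRisk:
    """Risk profile for one deployed model (illustrative)."""
    name: str
    incident_probability: float  # estimated probability of an AI incident, 0-1
    incident_cost: float         # estimated cost if the incident occurs, in dollars

    @property
    def materiality(self) -> float:
        # Materiality = probability of an incident x cost of that incident.
        return self.incident_probability * self.incident_cost


def risk_tier(materiality: float) -> str:
    """Map materiality to a tier; dollar thresholds are illustrative assumptions."""
    if materiality >= 1_000_000:
        return "high"
    if materiality >= 100_000:
        return "medium"
    return "low"


# Hypothetical credit-scoring model: 5% incident chance, $30M potential cost.
credit_model = ModelRisk("credit_scoring", incident_probability=0.05,
                         incident_cost=30_000_000)
print(credit_model.materiality)             # 1500000.0
print(risk_tier(credit_model.materiality))  # high
```

Ranking all models this way lets an organization direct its finite validation and oversight resources to the highest tier first, as described above.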

Finally, effective challenge must be supported by the overall corporate culture, in addition to proper compensation practices. Model risk management roles benefit from technical knowledge and modeling skills, they need explicit authority and support from higher management levels, and people need the right incentives to play their purposeful roles. For this to work, organizations need to devise clear roles and responsibility structures so that communication is always clear. Once this is all under control, organizations can turn to the classic three lines of defense: model developers and users, independent validation, and internal audit.

Caution: Model Risk Management Isn’t Perfect

While having model risk management in place is obviously preferable to the risk-ridden alternative, organizations should be aware that it is not a magic solution. It can be heavy-handed and slow things down, but keep in mind that incidents will do the same, with lasting impact. Furthermore, small organizations may find it difficult to develop and maintain model risk management because it requires additional personnel and resources. On the other hand, for organizations that do have adequate resources and teams in place, it can create a false sense of security that leads to a slippery slope of detachment.

How to Successfully and Responsibly Guide AI 

With these caveats in mind, many organizations will determine it is still in their best interest and even necessary to implement risk management strategies. According to Patrick Hall, some common mistakes that they should avoid moving forward include: 

  • Using governance or risk management for marketing purposes without actual risk mitigation 
  • Lacking the fundamental resources and understanding of model risk from the start
  • Not having a strong organizational position or understanding
  • Misaligning or lacking needed incentives for risk mitigation roles 
  • Overlooking or excluding pre-existing oversight mechanisms and personnel (i.e., audit, legal, risk, security)

Moving on from what not to do, Patrick Hall highlights important steps for organizations crafting their model governance frameworks:  

  • Carve out simple policies about the ML that is being implemented.
  • Ensure traceability of intention through simple documentation throughout model processes.
  • Create a fully fleshed-out incident response plan before beginning risk management. 
  • Approach and incorporate third parties with caution as they naturally contribute to higher risk levels. 
  • Change your mindset. Know that ML can hurt people and people can hurt your ML (e.g., hacking), and act accordingly. Prioritize safety then legality and then performance quality. Make a full and conscious effort to promote transparency, fairness, privacy, and security at all levels of your organization and throughout every stage of your business processes. 
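The documentation and response-plan steps above could start as something as simple as a structured record kept for every model. The field names and checks below are a hypothetical sketch, not a Dataiku or bnh.ai schema:

```python
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    """Minimal traceability record for one ML model (illustrative fields only)."""
    model_id: str
    intended_use: str  # documents intention, per the simple-documentation step
    owner: str         # accountable person or team
    risk_tier: str     # e.g., "low", "medium", "high"
    third_party_components: list = field(default_factory=list)  # tracked per the caution on third parties
    incident_response_plan: str = ""  # link or path to the response plan

    def ready_for_deployment(self) -> bool:
        # A model should not ship without an accountable owner
        # and a response plan already in place.
        return bool(self.owner and self.incident_response_plan)


record = ModelRecord(
    model_id="churn-v3",
    intended_use="Flag accounts at risk of churn for retention outreach only",
    owner="ml-risk-team",
    risk_tier="medium",
    incident_response_plan="wiki/churn-v3-response-plan",
)
print(record.ready_for_deployment())  # True
```

Even a lightweight record like this makes intention traceable throughout model processes and gives pre-existing oversight functions (audit, legal, risk, security) something concrete to review.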

There you have it — some of today’s industry best practices for AI Governance. Keep in mind, this topic will continue to develop in months and years to come as AI technology, its inherent risks, and our relationships to technology naturally evolve.
