4 Responsible AI Myths Every Data Leader Should Understand

By Lynn Heidmann

The rise of Generative AI has placed renewed focus on Responsible AI. Why? Developments in the enterprise AI space over previous years mean that going from producing one model a year to thousands is well within the average company's reach, and operationalization has made it possible for a single model to impact millions of decisions (as well as people).


This increase in the use of AI can present several new risks:

  1. AI systems can behave in unexpected and undesirable ways in a production environment, diverging from the original design or intent.
  2. Models may reproduce or amplify biases contained in data.
  3. More automation might mean fewer opportunities for detecting and correcting mistakes or unfair outcomes.

Despite the increase in the number of models in production, only a few companies have dedicated the time and effort to ensure these models are deployed responsibly. This isn't surprising given the context: there is no standard, accepted definition of what exactly Responsible AI means or what companies need to do to practice it (consultants, tech behemoths like Google, and organizations like the Partnership on AI are starting to develop guidelines, but these are far from common practice).

Responsible AI is the practice of designing, building, and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society — allowing companies to engender trust and scale AI with confidence. This article highlights four common myths about Responsible AI to bring some clarity about what it does (and doesn’t) mean for businesses. 

Myth #1: The Journey to Responsible AI Ends With the Definition of AI Ethics

One of the biggest misconceptions about Responsible AI is that it is just about defining AI ethics principles. However, Responsible AI is much broader and more nuanced, encompassing two main dimensions:

[Image: the two main dimensions of Responsible AI, intentionality and accountability]

Defining AI ethics is a cornerstone of Responsible AI, and it's no easy feat. The goal is to fully align what is delivered with what was intended, ensure outputs are fit for purpose, and build resilience across every dimension of an AI initiative.

When defining ethics, it's important not to confuse them with biases. Ethics are a set of principles, and bias is just one dimension to which those principles apply. For example, a machine learning (ML) model can be completely unbiased yet still unethical or, conversely, ethical but biased (e.g., the risk of gender bias might not pose an ethical problem for a retail clothing recommendation engine, but it's another story for a financial institution).

Myth #2: Responsible AI Challenges Can Be Solved With a Tools-Only Approach

Ah, the classic technology-is-a-magic-bullet myth. There's good news and bad news here. The good news is that tools can help organizations execute on a Responsible AI strategy; on the accountability front, good MLOps and governance practices are a reasonable start (MLOps is the standardization and streamlining of ML lifecycle management). But there's no free lunch: both MLOps and governance take effort, discipline, and time.

Those responsible for governance and MLOps initiatives must manage the inherent tension between different user profiles, striking a balance between getting the job done efficiently and protecting against all possible threats. This balance can be found by assessing the specific risk of each project and matching the governance process to that risk level. There are several dimensions to consider when assessing risk, including:

  • The audience for the model
  • The lifetime of the model and its outcomes
  • The impact of the outcome

This assessment should not only determine the governance measures applied but also drive the complete MLOps development and deployment tool chain.
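To make this concrete, here is a minimal sketch of how such a risk assessment might be encoded, assuming a simple 1-3 score per dimension and three governance tiers. The dimensions come from the list above; the scores, thresholds, and tier names are illustrative assumptions, not a standard or a Dataiku feature.

```python
from dataclasses import dataclass

# Hypothetical 1-3 scores for the three risk dimensions listed above.
@dataclass
class ProjectRisk:
    audience: int  # 1 = internal team, 2 = company-wide, 3 = external customers
    lifetime: int  # 1 = one-off analysis, 2 = months, 3 = years in production
    impact: int    # 1 = advisory only, 2 = automated, low stakes, 3 = automated, high stakes

def governance_tier(risk: ProjectRisk) -> str:
    """Map the combined risk score to a governance tier."""
    score = risk.audience + risk.lifetime + risk.impact
    if score <= 4:
        return "light"     # e.g., peer review before deployment
    if score <= 7:
        return "standard"  # e.g., sign-off plus scheduled monitoring
    return "strict"        # e.g., review board, audits, documented rollback plan

# A customer-facing churn model that will run for years and drive automated offers:
print(governance_tier(ProjectRisk(audience=3, lifetime=3, impact=2)))  # -> "strict"
```

In practice, each tier would then map to concrete process requirements: who signs off, how often the model is reviewed, what documentation is required, and so on.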

Tooling is also part of the equation for ensuring accountability with AI. For example:

  • Traditional statistical approaches can help teams uncover bias in data, as can subpopulation analysis, since a model can only ever be fair with respect to a specific metric (and it's important to note that no model will perform consistently across all populations for all metrics).
  • Other tools, such as those that help manage data drift or provide individual prediction explanations (the ability to see, down to the individual, which factors weighed into a particular prediction or outcome), can also be useful; a minimal drift check is sketched just below.
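To illustrate the drift point above, here is a minimal sketch of a population stability index (PSI) check, one common way to compare a feature's distribution at training time against what the model sees in production. The bin count, the synthetic data, and the 0.2 alert threshold are rule-of-thumb assumptions, not fixed standards.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Widen the outer edges so values that drift outside the training
    # range are still counted instead of silently dropped.
    edges[0], edges[-1] = -np.inf, np.inf
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_ages = rng.normal(40, 10, 10_000)    # feature distribution at training time
production_ages = rng.normal(45, 12, 10_000)  # same feature observed in production
score = psi(training_ages, production_ages)
if score > 0.2:  # common rule-of-thumb alert threshold
    print(f"PSI = {score:.3f}: significant drift, review the model")
```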

From an overarching standpoint, data science, ML, and AI platforms (like Dataiku) not only offer many of these smaller features, but they also generally help ensure models are white-box and transparent. BUT (and this is a big but) tools are not the be-all and end-all when it comes to Responsible AI. Tools exist only to support efficient implementation of the processes and principles defined by the people within a company.

In other words, it's humans who need to ask why a model made a decision, choose to audit how the model changes with different data, and ask the right questions to ensure it meets the organization's standards. Ultimately, the notion of fairness can be translated into literally dozens of mathematical definitions that are not only different but sometimes mutually incompatible.

Once a definition is chosen, there are tools to measure and mitigate biases (as illustrated earlier in this section), but it's still up to organizations themselves to choose the definition that best aligns with their values and is adequate for a specific use case.  
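As a toy illustration of why that choice matters, the sketch below computes two common fairness criteria on the same predictions: demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates). The data is made up; the point is simply that a model can satisfy one criterion while violating the other.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])  # model decisions
group = np.array(list("aaaaabbbbb"))                # protected attribute

def selection_rate(pred, mask):
    """Share of the group that receives a positive decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of the group's actual positives that the model approves."""
    positives = mask & (true == 1)
    return pred[positives].mean()

a, b = group == "a", group == "b"

# Demographic parity holds: both groups are selected at the same 0.40 rate...
print(f"selection rate: a={selection_rate(y_pred, a):.2f}, b={selection_rate(y_pred, b):.2f}")

# ...yet equal opportunity fails: qualified members of group a are approved
# more often (0.67) than qualified members of group b (0.50).
print(f"true positive rate: a={true_positive_rate(y_true, y_pred, a):.2f}, "
      f"b={true_positive_rate(y_true, y_pred, b):.2f}")
```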

Myth #3: Problems Only Happen Due to Malice or Incompetence

In recent years, Responsible AI issues have arisen at nearly all of the big technology players, from Google to Amazon and beyond (and those are just the blunders that have been made public). These incidents provide two important lessons:

  1. If it can happen to Google, Apple, Facebook, and Amazon (GAFA), with their technical expertise, enormous payrolls, and virtually unlimited resources, it can happen to any organization. What most companies don't have is the ability to withstand such a crisis, which is why every enterprise should invest in Responsible AI.
  2. For the most part, these high-profile mistakes (for which the organizations pay a major reputational price) have not been caused by malicious or incompetent staff but rather by lack of intentionality and accountability at various stages of the ML project lifecycle.

Indeed, an AI system with fundamental underlying issues that put the company at risk can come about quite by accident, which is part of the reason building robust Responsible AI systems and processes is such a challenge.

Myth #4: Responsible AI Is a Problem for AI Specialists Only

There is a feeling that Responsible AI shouldn't be the responsibility of just one person, and perhaps that's true: a single individual can't possibly keep an eye on every process and decision across the organization, especially as the company scales its ability to deliver ML models. Generally, none of the stakeholders involved in a data science, ML, or AI project has a comprehensive perspective:

  • Domain experts are the only ones who truly understand the underlying business problem, the data collection process, and the limitations of the data.
  • Data professionals (including data engineers, data scientists, and ML engineers) are the only ones who understand the technical data science aspects.
  • Decision makers are the only ones with the power to make judgment calls when the regulatory, business, or reputational stakes are high.

However, at the same time, Responsible AI needs to move from an abstract concept into something that can be concretely quantified and owned by individuals, kept firmly in human hands. The following rubric outlines how each component of the data workflow (the data itself, the technology, and the resulting models) can be cross-checked to ensure governability, sustainability, and accountability:

[Image: rubric cross-checking data, technology, and models for accountability, sustainability, and intentionality]

Conclusion

Responsible AI is an enormous topic for organizations to tackle, both when it comes to the theory (what is it, and how should it be implemented?) and the practice (how, and to what extent, can technology help?), not to mention the potential consequences. This article has only scratched the surface when it comes to considerations for organizations that want to become leaders in this space.

Ultimately, there are two ways to look at Responsible AI:

The Stick

What do we need to implement in order to protect the company from risk when it comes to Responsible AI?

The Carrot

How can we create trust using Responsible AI practices in order to embrace its ability to transform the company?

The reality is that it will take both approaches (plus a collaborative effort from everyone, not just data practitioners or just executives) to succeed. It's also important for organizations to recognize that Responsible AI is just one part of a larger question of sustainability around AI efforts: establishing the continued reliability of AI-augmented processes in both their operation and their execution.

This includes investing in infrastructure that is built for the future; reliable, continuous access to the data that supports current (or future) data projects; and a centralized place to monitor models in production, with people accountable for their maintenance. To end on a positive note, know that Responsible AI is not an impossible challenge. With the right mix of openness, internal collaboration, and step-by-step framework construction, it is largely possible to tackle this challenge while scaling AI, ensuring that the very legitimate surge toward Responsible AI does not become a blocker for the much-needed embedding of AI throughout company processes (shameless plug: Dataiku is in a great position to help!).

