Making Enterprise Generative AI Safe & Responsible

Dataiku Product, Scaling AI | Triveni Gandhi

There's a lot of recent excitement around Generative AI, in particular Large Language Models (LLMs), and business leaders are pushing forward with the use of AI at an unprecedented pace.

On the other hand, it's also true that flaws within artificial intelligence systems can present real risks. That was true of AI before Generative AI, and it remains true today. Now more than ever, we need to think about building AI-powered systems in a responsible and governed manner.

Building responsible Generative AI sounds great in theory, but what does that mean in practice?

→ Get the Ebook: Building Responsible Generative AI Applications

It’s important for every organization to have a set of guardrails defined for the use of AI. This is true whether your business is:

  • Scaling the use of AI across the organization.
  • Experimenting with the latest developments in Generative AI models.
  • Concerned with or looking to make sense of forthcoming regulation.

Various standards organizations and governments have proposed frameworks for AI values and risk management, and these are a good starting point. However, feedback from our customers tells us there's still a need for a more specific, robust, and tested framework.

Mitigating Generative AI Risks In Practice With Dataiku

To support scaling AI safely, Dataiku offers built-in features for responsible design, deployment (MLOps), and governance. Last month, we released a full, publicly available Responsible AI training: “Responsible AI in Practice” is available now on Dataiku Academy.

But that’s not all: we're going beyond product features and the basics of reliability, accountability, fairness, and transparency with the development of our very own RAFT (Responsible, Accountable, Fair, and Transparent) framework. The values outlined in RAFT are crucial for the development of AI and analytics, and they cover both traditional methods and new methods in Generative AI.

RAFT (Responsible, Accountable, Fair, and Transparent) framework for responsible AI
The basics of the RAFT framework for Responsible AI, and the building blocks for the full framework, are:

  1. AI systems that are reliable and secure. AI development happens with consistency and reliability in mind across the entire lifecycle, and data and models are secure and privacy-enhancing.
  2. AI systems that are accountable and governed. Ownership over each aspect of the AI lifecycle is documented, and that documentation supports oversight and control mechanisms.
  3. AI systems that are fair and human-centric. The people building AI systems work to minimize bias against individuals or groups and to support human determination and choice.
  4. AI systems that are transparent and explainable. End users are aware when organizations are using AI, and the company provides explanations for the methods, parameters, and data used.


Introducing the Full RAFT Framework

The RAFT framework builds upon this baseline set of values for safe AI. The goal is for it to serve as a starting point for your organization’s own indicators for Responsible AI. 

Of course, as with any framework, we encourage governance, ethics, and compliance teams to adapt it for specific industry requirements, local regulations, or additional business values.

Don’t forget: it’s critical to apply the principles from the RAFT framework to all models in an AI system, both traditional and generative.

For example, take the LLM-Enhanced Demand Forecast example in the Dataiku Generative AI Use Case Collection. The project uses an LLM to generate commentary based on outputs from a demand forecasting machine learning model. That means the two models call for different Responsible AI considerations.
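To make that pattern concrete, here is a minimal sketch in Python of a two-model pipeline with a separate check per model. It is our own illustration, not code from the Dataiku project, and names like call_llm and check_commentary are hypothetical stand-ins:

```python
# A minimal sketch (not the actual Dataiku project code) of the two-model
# pattern: a traditional model produces numbers, an LLM turns them into
# commentary, and each stage gets its own Responsible AI check.
from statistics import mean

def forecast_demand(history: list[float]) -> float:
    """Stand-in forecaster: a simple moving average, not a real demand model."""
    return mean(history[-3:])

def build_prompt(sku: str, forecast: float) -> str:
    return (
        f"Write a short comment for planners about SKU {sku}, "
        f"whose forecast demand next month is {forecast:.0f} units."
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"[LLM draft] {prompt}"

def check_forecast(history: list[float], forecast: float) -> bool:
    # Traditional-model consideration: flag implausible numbers for review.
    return 0 <= forecast <= 2 * max(history)

def check_commentary(text: str) -> bool:
    # Generative-model consideration: screen the generated text itself
    # (here a crude keyword filter; in practice, toxicity or factuality checks).
    banned = ["guaranteed", "certainly will"]
    return not any(term in text.lower() for term in banned)

history = [120.0, 135.0, 128.0]
fc = forecast_demand(history)
if check_forecast(history, fc):
    comment = call_llm(build_prompt("A-42", fc))
    if check_commentary(comment):
        print(comment)
```

The point is that each stage carries its own safeguard: a plausibility gate on the traditional model’s numbers, and a content screen on the LLM’s text.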

It’s also worth noting that specific methods to assess and manage the bias of Generative AI are still in development. When building or fine-tuning a model, developers should use diverse and representative datasets. They should also evaluate model outputs for risks like polarity, toxicity, or other unfair behavior toward sensitive groups.
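As one example of such a check, here is a minimal sketch of screening generated text for toxicity before it reaches users. It assumes the Hugging Face transformers library and the publicly available unitary/toxic-bert classifier; the 0.5 threshold is an arbitrary illustration, not a recommendation:

```python
from transformers import pipeline

# "unitary/toxic-bert" is a public toxicity classifier on the Hugging Face Hub.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def passes_toxicity_check(text: str, threshold: float = 0.5) -> bool:
    """Return True if the generated text scores below the toxicity threshold."""
    result = toxicity(text)[0]  # e.g. {"label": "toxic", "score": 0.02}
    return result["score"] < threshold

drafts = ["Thanks for your patience -- your order ships tomorrow."]
safe_drafts = [d for d in drafts if passes_toxicity_check(d)]
print(safe_drafts)
```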

Other Responsible AI Considerations

The RAFT framework is a good basis for developing Responsible AI (generative or not). However, it’s not the be-all and end-all of ensuring that AI systems have a positive impact. It’s also important to consider:

The Impact of AI Once It’s In Use

Dataiku recommends risk scoring for unintended consequences based on two variables, illustrated in the toy sketch after this list:

  1. Whether the risk could materialize as harm to individuals and groups directly, because of the solution’s implementation, or indirectly, because of some constellation of factors that is difficult to qualify at the time of deployment.
  2. Whether the risk could materialize as a harm immediately or over a longer period of time.
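Here is a toy sketch of what such a scoring might look like in Python. It is our own illustration rather than a Dataiku feature, and the numeric weights are arbitrary:

```python
from enum import Enum

class Directness(Enum):
    DIRECT = 2    # harm flows from the solution's implementation itself
    INDIRECT = 1  # harm flows from a hard-to-qualify constellation of factors

class Horizon(Enum):
    IMMEDIATE = 2
    LONG_TERM = 1  # harm accumulates over a longer period of time

def risk_score(directness: Directness, horizon: Horizon) -> int:
    """Arbitrary illustrative weighting: higher score = review before deploying."""
    return directness.value * horizon.value

# Example: a use case that could directly and immediately harm individuals.
print(risk_score(Directness.DIRECT, Horizon.IMMEDIATE))  # 4
```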

Specific Risks of Generative AI by Context

Generative AI comes with many inherent, high-level risks. However, businesses should also consider the context of each use case to understand the additional risks it may carry.

A baseline approach is to assess the use case across two dimensions:

  1. Target of analysis. This dimension focuses on the type of data or documents that the model will use to generate output.
  2. Delivery method. In other words, how the output of a model is delivered to end users.

Each category within these two dimensions carries different risk tradeoffs, and with those tradeoffs come strategies to prevent harm to the business, clients, and broader society. A toy sketch of recording such an assessment follows.
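This sketch is our own illustration, not a Dataiku API; the category values and decision rules are hypothetical placeholders for whatever taxonomy your team defines:

```python
from dataclasses import dataclass

@dataclass
class UseCaseAssessment:
    name: str
    target_of_analysis: str  # e.g. "internal documents" or "customer data"
    delivery_method: str     # e.g. "human-reviewed draft" or "direct to user"

    def guidance(self) -> str:
        risky_target = self.target_of_analysis == "customer data"
        risky_delivery = self.delivery_method == "direct to user"
        if risky_target and risky_delivery:
            return "High scrutiny: sensitive data with no human in the loop."
        if risky_target or risky_delivery:
            return "Add mitigations on the riskier dimension."
        return "Baseline controls are likely sufficient."

demo = UseCaseAssessment(
    name="LLM-Enhanced Demand Forecast",
    target_of_analysis="internal documents",
    delivery_method="human-reviewed draft",
)
print(demo.guidance())
```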

It’s Just the Beginning for Responsible Generative AI

It’s only the beginning for Generative AI, which means we’re only at the beginning of our understanding, both of the opportunity and of the risks it presents. Considering responsible development and deployment now, while applications are nascent, is a good way to mitigate those risks.
