Introducing LLM Cost Guard, the Newest Addition to the LLM Mesh

By Lauren Anderson

When it comes to Large Language Models (LLMs), many IT team leaders are trying to get a handle on the question “How much will it all cost?” 

With LLM experimentation becoming the new normal, many teams can tell stories of being hit with unanticipated bills as they begin to develop policies and internal procedures around usage. As more teams, use cases, and applications emerge across the enterprise, IT teams need a way to easily monitor both performance and cost, so that companies get the most out of their investments and can identify problematic usage patterns before they have a major impact on the bottom line.

Costs Associated With LLM Usage

In general, costs associated with LLMs can be categorized based on whether your organization:

  1. Accesses commercial models via APIs from providers such as OpenAI, Anthropic, or Mistral, or 
  2. Self-hosts open-source models, such as those downloaded from a hub like Hugging Face 

Accessing Commercial Models via APIs 

Part of the reason costs can vary so greatly is that there's no standardized pricing across commercial models. As data teams experiment with different models to gauge optimal performance, they may not pay close attention to the financial impact.

Pricing for these models is typically based on tokens, units of text that roughly correspond to words, counted across both queries and responses. Put simply, you pay for every word you feed into the LLM and every word the LLM gives you back.
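
To make that concrete, here is a minimal sketch of how token-based billing adds up. The per-token prices below are illustrative placeholders, not the rates of any particular provider.

```python
# Illustrative token-based cost estimate. Prices are hypothetical
# placeholders (USD per 1,000 tokens), not any provider's actual rates.
PRICE_PER_1K_INPUT = 0.01
PRICE_PER_1K_OUTPUT = 0.03

def estimate_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in USD of a single LLM call."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# One call with a 500-token prompt and a 700-token response:
print(f"${estimate_call_cost(500, 700):.4f}")             # $0.0260
# The same call repeated 100,000 times in a month:
print(f"${estimate_call_cost(500, 700) * 100_000:,.2f}")  # $2,600.00
```

Fractions of a cent per call add up quickly at application scale, which is exactly why per-team and per-project monitoring matters.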

Self-Hosted LLMs

Self-hosted LLMs are sometimes perceived as a less costly alternative. However, by hosting a model on your own infrastructure, you pay directly for server costs and for the highly skilled people needed to customize the models for your specific purposes and update them over time, not to mention the cost of GPUs and other hardware components at scale.
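
As a rough illustration of how those infrastructure costs accumulate, consider the back-of-envelope figures below. All numbers are hypothetical assumptions; actual GPU pricing varies widely by vendor, hardware, and region.

```python
# Hypothetical back-of-envelope for a self-hosted LLM deployment.
# Every figure here is an illustrative assumption, not a real quote.
gpu_hourly_rate = 4.00   # USD per hour for one cloud GPU instance
gpu_instances = 2        # instances kept warm for availability
hours_per_month = 24 * 30

compute_monthly = gpu_hourly_rate * gpu_instances * hours_per_month
print(f"GPU compute alone: ${compute_monthly:,.2f}/month")  # $5,760.00/month
# Engineering time to customize, evaluate, and maintain the models
# comes on top of this, and often dominates the total.
```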

How the Dataiku LLM Cost Guard & LLM Mesh Can Help 

With all the different costs associated with LLM usage, IT teams and admins need a way to easily monitor LLM cost and performance so that they can proactively anticipate and manage financial impact. LLM Cost Guard provides a means for teams to oversee and control costs (by application, service, user, or project) and diagnose issues.

Audit Logs

As we alluded to in our last post, Dataiku offers comprehensive audit logs that enable admins to easily pinpoint who is doing what. LLM Cost Guard offers a fully auditable log of who is using which LLM and which service for which purpose, enabling cost tracking (and internal re-billing) as well as full traceability of both requests and responses to these sometimes unpredictable models.
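
To give a sense of what such a log entry might capture, here is a hypothetical record. The field names are illustrative only and are not Dataiku's actual log schema.

```python
# Hypothetical shape of one LLM audit-log entry; field names are
# illustrative, not Dataiku's actual schema.
audit_entry = {
    "timestamp": "2024-05-14T09:32:07Z",
    "user": "jdoe",
    "project_key": "SUPPORT_CHATBOT",   # which project made the call
    "llm_connection": "openai-gpt-4",   # which model and service
    "purpose": "ticket-summarization",  # what the call was for
    "input_tokens": 512,
    "output_tokens": 230,
    "estimated_cost_usd": 0.0121,
    "request_id": "req-7f3a91",         # ties the request to its response
}
```

A record like this is what makes per-user cost attribution, and therefore internal re-billing, possible.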

Performance Monitoring

Dataiku LLM Cost Guard includes a pre-built performance monitoring dashboard that helps admins track usage and costs for LLM services and providers. This way, teams can diagnose issues and select the optimal service based on application needs. 

Caching 

Caching responses to common queries with the LLM Mesh avoids the need to regenerate the response, offering both cost savings and a performance boost. In addition, if you self-host, you can cache local Hugging Face models to reduce infrastructure costs.
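
To illustrate the idea behind response caching, here is a minimal sketch that memoizes answers by prompt. It is a simplification of what a mesh layer does, not the LLM Mesh implementation itself; `call_llm` stands in for whatever client your provider exposes.

```python
import hashlib

# Minimal response cache: identical prompts are served from memory
# instead of triggering (and paying for) a fresh generation.
_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_llm) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # the paid call happens only once
    return _cache[key]
```

In practice, a production cache also needs an eviction policy and has to account for sampling making responses non-deterministic, which is part of why handling caching in a shared layer is attractive.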

Cost and Resource Usage Reporting

LLM Cost Guard presents cost monitoring and usage reporting in an easy-to-read dashboard so that you can proactively identify where you're spending the most. The cost estimate is based on a sample of completed prompts and responses, applied to the publicly available "sticker" price of the hosted service. With the ability to filter by user, connection, LLM type, cache status, context type, project key, and more, you can quickly diagnose issues and respond.
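
As a sketch of that estimation approach: sample the token counts of completed calls, price them at the published rate, and extrapolate to total traffic. All numbers below are illustrative.

```python
# Illustrative sample-based estimate: price a sample of completed calls
# at the provider's published ("sticker") rate, then extrapolate.
sample = [(420, 180), (950, 310), (300, 90)]  # (input, output) token counts
sticker_in, sticker_out = 0.01, 0.03          # hypothetical USD per 1K tokens

avg_cost = sum(i / 1000 * sticker_in + o / 1000 * sticker_out
               for i, o in sample) / len(sample)
monthly_calls = 25_000                        # calls observed this month
print(f"Estimated monthly spend: ${avg_cost * monthly_calls:,.2f}")  # $284.17
```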

All of these features mean that IT teams can get a handle on LLMOps and set themselves up for success, ensuring optimal cost and performance. As you've seen, regardless of which models, methods, or technical approaches you choose for each project, Dataiku's LLM Mesh serves as the layer common to all of your applications, giving you the ability to safely and efficiently deliver these solutions in enterprise settings.
