Benefits of the LLM Mesh, Part 3: Protecting Sensitive Data

Dataiku Product, Scaling AI | Lauren Anderson

As the explosion of Generative AI and Large Language Model (LLM) usage rapidly transforms the landscape of AI, scrutiny over how these models handle sensitive data has grown in step. For example, OpenAI is being investigated for violating GDPR, and ChatGPT was even temporarily banned in Italy over the company’s practices around data usage and retention.

The nature of LLMs means that specific data points can’t simply be “deleted” post-training, so extra care is needed at the intersection of Generative AI and sensitive data. Imagine that Personally Identifiable Information (PII) is accidentally provided in a chatbot response, or company secrets inadvertently get mixed into the outputs of a business application. Companies that use LLMs without the right measures and policies in place to prevent the input and output of sensitive data and PII risk running afoul of government regulations, opening themselves up to infosec nightmares, hefty fines, or worse.

Ultimately, the fear of data mishandling has led 75% of enterprises to say that they don’t plan to use commercial LLMs in production, citing data privacy as the main concern. However, with the incredible gains these new technologies offer in productivity and innovation, stragglers risk being left behind by the competition if they choose not to adopt them at such a critical turning point.

This friction puts IT and data teams in a tough position. How can they take advantage of Generative AI capabilities through LLMs while maintaining data security, ensuring reputational integrity, and avoiding unintended harm?

Understanding LLM Options and Levels of Control Over Data

When it comes to using LLMs, there are currently three main options that companies have to choose from:

1. Free, Public Generative AI Chatbots

Examples such as Google’s Bard, Anthropic’s Claude, or OpenAI’s ChatGPT are the most accessible for business users, but using these services opens companies up to immense risk, as employees have unrestricted access to an ungoverned third-party application. In a separate study from the one linked above, nearly 75% of companies reported that they have implemented or are considering internal bans on ChatGPT and other chatbot services, citing data privacy as the main concern. Even so, given the readily available benefits of these tools, many employees will continue to use these applications regardless of policy unless a viable alternative is presented to them, creating a massive shadow IT problem.

2. Paid LLM Services

OpenAI, Azure, and Google, for example, give you API access to powerful pre-trained models with minimal development work. While the policies of these services state that they will not capture request information for future model retraining, in most cases there are still security risks associated with users unintentionally sharing sensitive data that shouldn’t leave the company via API calls.
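
To make the risk concrete, here is a minimal sketch using the OpenAI Python SDK (v1.x); the model name and prompt are purely illustrative. Whatever ends up in the prompt, including PII a user pastes in, travels to the provider in the API call.

```python
# Minimal sketch (OpenAI Python SDK v1.x): anything placed in the prompt,
# including PII pasted in by a user, leaves your network in the API call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical user input: note the sensitive data embedded in the request.
user_text = "Summarize this contract for client Jane Doe, SSN 123-45-6789 ..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": user_text}],
)
print(response.choices[0].message.content)
```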

3. Hosting an Open-Source LLM

With the previous two options, the LLM is hosted and served by a third-party provider. The third option is to download an open-source model from a model hub like Hugging Face into your own local environment and either use it as-is or adapt it for your own specific purposes. While open-source models give you the most control over data, since your data never leaves your ecosystem, there are significant costs and effort involved in hosting and fine-tuning these models (which can be massive from both a size and resource-consumption perspective, not to mention require specialized expertise). Even with this approach, you want to ensure that the outputs from these models still align with corporate policy, are non-toxic, and don’t expose sensitive data.
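
For illustration, here is a minimal sketch of running an open-source model locally with the Hugging Face transformers library; a small model (gpt2) stands in for the much larger LLMs you would actually host, which typically require GPU infrastructure and a serving layer.

```python
# Minimal sketch: running an open-source model locally with Hugging Face transformers.
# gpt2 is used only for illustration; production-grade LLMs are far larger.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt never leaves your environment: the model runs on your own hardware.
output = generator(
    "Draft a polite reply to a customer asking about delivery times:",
    max_new_tokens=60,
    do_sample=True,
)
print(output[0]["generated_text"])
```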

In an evolving market landscape and regulatory environment, it’s important to have the flexibility to choose both the methodology and models that work best for your organization, while maintaining optimal control over your data. 

While you can create an internal policy that requires employees to refrain from sharing PII and sensitive data with public chatbots, that does not prevent them from doing so, whether inadvertently or as bad actors. Since many public chatbots retain user queries to create a continuous experience, and this data could potentially be used for model retraining, many companies are choosing to develop their own business applications using either option 2 or 3.

How to Secure Data When Working With LLMs: Enter Dataiku’s LLM Mesh 

To both prevent the input of sensitive data into public services and block unwanted types of output from public or privately hosted LLMs, admins need mechanisms to identify this type of data and take action on it. The LLM Mesh provides content moderation techniques that can be applied to both queries and responses, leveraging dedicated content moderation models to filter input and output. Here’s how it can help:

1. Detect PII and Take Action

Leveraging the Presidio analyzer, admins can add PII detection at the connection level to filter every query and define an action to take. This can range from rejecting the query outright to simply replacing the sensitive data.

[Screenshot: PII detection in Dataiku]
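
For context on what Presidio does under the hood, here is a minimal standalone sketch using the open-source presidio-analyzer and presidio-anonymizer packages. It is not the Dataiku connection-level configuration (which is set up in the admin settings), just an illustration of how PII can be detected and then either blocked or replaced.

```python
# Requires: pip install presidio-analyzer presidio-anonymizer
# (plus a spaCy language model, e.g., en_core_web_lg, for the default NLP engine)
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

query = "Contact John Smith at john.smith@example.com or 212-555-0147."

# Detect PII entities (names, emails, phone numbers, etc.) in the query.
results = analyzer.analyze(text=query, language="en")

if results:
    # Option 1: reject the query outright
    # raise ValueError("Query contains PII and was blocked.")

    # Option 2: replace the sensitive data before the query reaches the LLM
    redacted = anonymizer.anonymize(text=query, analyzer_results=results)
    print(redacted.text)  # e.g., "Contact <PERSON> at <EMAIL_ADDRESS> or <PHONE_NUMBER>."
```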

2. Identify and Filter Forbidden Terms

If there are specific terms that might be problematic for your company (such as client names or project titles that are currently under an NDA), forbidden terms filters can be added for both queries and responses to remove specific data from inputs or outputs. Admins can easily configure this at the connection level by referencing the source containing the forbidden terms and the corresponding matching mode. Users can also apply an additional layer of forbidden terms in Prompt Studios, so controls are applied at the prompt level as well.

[Screenshot: Filter forbidden terms in Dataiku]
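
As a rough illustration of the idea (not Dataiku’s actual implementation or matching modes), a forbidden terms filter boils down to matching a maintained term list against queries and responses and then either blocking or redacting the text. The term list, matching behavior, and redaction token below are all hypothetical.

```python
import re

# Hypothetical list of forbidden terms (e.g., NDA-protected client names or codenames).
FORBIDDEN_TERMS = ["Project Falcon", "Acme Corp"]

# One case-insensitive pattern; a different matching mode could also strip
# punctuation or compare normalized forms of each term.
pattern = re.compile("|".join(re.escape(t) for t in FORBIDDEN_TERMS), re.IGNORECASE)

def filter_forbidden(text: str, action: str = "redact") -> str:
    """Apply the forbidden-terms policy to a query or a response."""
    if not pattern.search(text):
        return text
    if action == "block":
        raise ValueError("Text contains forbidden terms and was blocked.")
    return pattern.sub("[REDACTED]", text)

print(filter_forbidden("Status update on Project Falcon for Acme Corp"))
# -> "Status update on [REDACTED] for [REDACTED]"
```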

3. Prevent Toxicity

Since LLMs work by predicting the next most likely word in a sequence, they can propagate toxic language or otherwise harmful biases learned during the training process. Allowing employees or customers to view this “toxic” content without any guardrails could seriously impact the business. Toxicity detection is key to ensuring that a model’s input or output does not contain inappropriate or offensive content, and to defining appropriate remedies if it does.

[Screenshot: Toxicity detection in Dataiku]
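
As an illustration of the underlying technique (not Dataiku’s specific moderation model), a dedicated classifier can score each query or response for toxicity and withhold anything above a threshold. The model name and threshold below are example choices, not recommendations.

```python
from transformers import pipeline

# Example toxicity classifier from the Hugging Face Hub; any comparable
# content moderation model could be substituted here.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def screen(text: str, threshold: float = 0.7) -> str:
    """Return the text unchanged, or a safe fallback if it scores as toxic."""
    result = toxicity(text)[0]
    if result["label"].lower() == "toxic" and result["score"] >= threshold:
        return "This response was withheld because it may contain inappropriate content."
    return text

print(screen("Thanks, your order has shipped and will arrive Friday."))
```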

4. Create Custom Hooks

If your company has a specific policy or compliance constraint that’s a little more involved, you can also develop custom hooks via a Java plugin to take action before a query is sent or after a response is received. This allows you to define specific filters and create actions based on those filters, customized to your particular needs.
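
In Dataiku these hooks are written as a Java plugin; the sketch below uses Python purely to illustrate the pattern: a pre-query hook and a post-response hook that can each rewrite or reject text, with a hypothetical rule that strips internal ticket IDs.

```python
import re
from typing import Callable, List

# Conceptual sketch of the hook pattern, not the actual Dataiku plugin API.
TextHook = Callable[[str], str]

class GuardedLLMClient:
    def __init__(self, send: Callable[[str], str],
                 pre_hooks: List[TextHook], post_hooks: List[TextHook]):
        self.send = send
        self.pre_hooks = pre_hooks
        self.post_hooks = post_hooks

    def query(self, prompt: str) -> str:
        for hook in self.pre_hooks:      # runs before the query is sent
            prompt = hook(prompt)
        response = self.send(prompt)     # call to the underlying LLM service
        for hook in self.post_hooks:     # runs after the response is received
            response = hook(response)
        return response

# Hypothetical policy: internal ticket IDs must never reach an external service.
def strip_ticket_ids(text: str) -> str:
    return re.sub(r"\bTICKET-\d+\b", "[internal-id]", text)

client = GuardedLLMClient(send=lambda p: f"(model answer to: {p})",
                          pre_hooks=[strip_ticket_ids], post_hooks=[])
print(client.query("Summarize TICKET-4821 for the customer"))
```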

5. Keep Tabs on Problems With Audit Trails

All queries and responses can be easily tracked with configurable audit trails, allowing you to proactively identify problematic patterns. For example, maybe you keep getting failed queries due to forbidden terms being identified in a dataset. That may lead you to take action around that dataset to prevent additional unintended impact, creating an additional layer of security and governance. Not to mention, you can track specific user activity to get a better understanding of who is using which LLM and for what purpose.

[Screenshot: Audit trails and log files in Dataiku]
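
Conceptually, an audit trail is just a structured record of every call. The sketch below (with hypothetical field names and log location, not Dataiku’s actual log format) shows the kind of per-query record that makes patterns such as repeated forbidden-term rejections easy to analyze later.

```python
import json
import time

AUDIT_LOG = "llm_audit.jsonl"  # hypothetical log location

def log_llm_call(user: str, model: str, query: str, response: str, status: str) -> None:
    """Append one JSON line per LLM call with user, model, and moderation outcome."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "query": query,
        "response": response,
        "status": status,  # e.g., "ok", "blocked_forbidden_term", "pii_redacted"
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

log_llm_call("jdoe", "example-model", "Summarize Q3 results", "(response text)", "ok")
```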
