Fine-Tune Your Generative AI Models With LLM Fine-Tuning in Dataiku

Dataiku Product, Scaling AI | Chad Kwiwon Covin

We all know the power of Generative AI (GenAI), but customizing large language models (LLMs) for specific tasks has transformed how businesses operate. It has allowed teams to address unique challenges with speed and precision never seen before. The easiest way to customize your LLM experience is prompt engineering, which optimizes the prompts sent to LLMs to achieve better responses.

Another way GenAI can be customized is Retrieval-Augmented Generation (RAG), which has set new standards for searching and querying documents by allowing LLMs to pull in the latest information from organizational knowledge bases. For example, customer service teams can use RAG-based applications to quickly find and provide the most up-to-date information to customers, while manufacturing teams can leverage it to search the latest industry reports and optimize production processes.

But what if you need your LLM to do more? What if it needs to go beyond providing up-to-date information, and needs to adopt a particular line of reasoning, style, or tone? Or operate on a type of data that a foundational model has likely never been exposed to? This is where fine-tuning comes in.

Why Fine-Tuning Is a Game Changer

What’s fine-tuning exactly? Fine-tuning is a technique in machine learning and AI used to adapt a pre-trained model to perform better on a specific task and/or domain. Use cases that require consistent and highly specialized outputs are perfect for fine-tuned LLMs. Here are a few benefits of fine-tuning your LLM for a given use case:

Tailor to Your Organization’s Unique Needs and Objectives: LLM fine-tuning allows businesses to customize LLMs to align with their specific processes and goals. Whether it is a legal firm that requires precise legal vocabulary handling or a healthcare provider that needs accurate medical text interpretation, fine-tuning adapts the model to excel in specialized tasks.

Enhance Overall User Experience With Improved Accuracy and Consistency: Fine-tuning an LLM on a specific dataset not only tailors the outputs but also ensures more accurate and consistent responses. This is important in domains like customer support, where consistently providing clear and accurate information builds trust and enhances user satisfaction.

Better Control of LLMs With Less Risk: Fine-tuning gives organizations better control over their LLMs, reducing the risk of hallucinations and inconsistencies. By customizing the model’s behavior to specific tasks, businesses can mitigate risks associated with generic, out-of-the-box models, ensuring safer and more reliable tailored outputs.

Fine-Tuning in Dataiku: Accessibility and Efficiency

In Dataiku, both non-technical users and experienced data scientists can now leverage LLM fine-tuning. There is both a visual approach and a code-based approach, ensuring the benefits of fine-tuned LLMs are accessible across the organization. Both methods save fine-tuned models to the LLM Mesh, where they get the same control and governance as other connected LLMs.
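For instance, once a model has been fine-tuned and saved, it appears alongside every other LLM Mesh entry. As a minimal sketch (assuming a recent version of the dataiku Python client, whose exact LLM Mesh helpers may vary by release), listing the models available to a project might look like this:

```python
import dataiku

# Connect to the current Dataiku instance and project
client = dataiku.api_client()
project = client.get_default_project()

# List every LLM exposed through the LLM Mesh for this project,
# including fine-tuned models saved back to the Mesh
for llm in project.list_llms():
    print(llm.id, "-", llm.description)
```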

Visual, Managed Recipe Method

Starting with the visual fine-tuning recipe, you can fine-tune Hugging Face local models and hosted models from service providers like OpenAI (Azure OpenAI and AWS Bedrock support coming soon). The recipe only requires an LLM registered in your LLM Mesh and a training dataset, with an optional validation dataset.
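The training dataset itself is typically just rows of example prompts paired with the desired responses. As an illustration (the column names below are assumptions, to be mapped to whatever the recipe expects), a tiny training set could be built like this:

```python
import pandas as pd

# Illustrative prompt/completion pairs; real training sets need
# far more examples to meaningfully shift model behavior
train_df = pd.DataFrame(
    {
        "input": [
            "Summarize this contract clause in plain English: ...",
            "Classify the sentiment of this support ticket: ...",
        ],
        "expected_output": [
            "The supplier must deliver within 30 days or pay a penalty.",
            "negative",
        ],
    }
)

# Write the dataset out so it can serve as the recipe's training input
train_df.to_csv("finetuning_train.csv", index=False)
```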

In the recipe interface, the Hyperparameters section is in “Auto” mode by default but can be turned to “Explicit” for manual adjustments.

This method is perfect for non-coders and quick testing using fine-tuned LLMs. It simplifies the process and requires no code to tune your model to your liking. For more technical users, the managed recipe can be used for rapid iteration and testing, allowing for experiments with fine-tuned LLMs on various use cases.

Python, Code-Based Method

Data scientists and technical users also have the option to fine-tune LLMs in a Python recipe. This approach gives coders the flexibility to try more advanced configurations for fine-tuning Hugging Face local models while still leveraging the secure, managed experience of Dataiku’s LLM Mesh.

An example of fine-tuning a Hugging Face model through code and importing it to a Saved Model.
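To make that concrete, here is a minimal sketch of what such a recipe could contain, using the standard Hugging Face transformers Trainer on a small causal model. The model name, toy data, and the final Saved Model import are placeholders for illustration, not Dataiku’s exact API:

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilgpt2"  # placeholder: any local causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Toy corpus; in practice, read prompt/response pairs from a Dataiku dataset
texts = [
    "Prompt: Summarize the clause.\nResponse: Delivery is due in 30 days.",
    "Prompt: Classify the ticket.\nResponse: negative",
]
train_ds = Dataset.from_dict({"text": texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save the weights; importing them into a Dataiku Saved Model / LLM Mesh
# entry is done with Dataiku's fine-tuned-model import, omitted here
trainer.save_model("finetuned")
```

From there, the saved weights can be registered so the model is governed like any other LLM Mesh entry.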

In short, this method ensures complete customizability, allowing you to choose the LLM's weights and configurations and fine-tune the model exactly to fit your needs. Additionally, saving the fine-tuned LLM to the LLM Mesh ensures control and auditability, maintaining a clear record of activity and enabling secure management of the model.

With these two powerful methods for fine-tuning, tweaking your LLMs has never been easier — but knowing when to fine-tune is just as important as knowing how. The question is, when should we use fine-tuning compared to another approach like RAG?

When to Use Fine-Tuning Instead of RAG

Even for some of the most complex use cases, RAG can be the most effective approach to get your desired output. RAG is ideal for scenarios where adaptability is key: if you want to access new or dynamic data, RAG allows a simple pipeline to include the latest information quickly. Fine-tuning is more static, depending on the dataset provided for training rather than on information retrieval.

However, fine-tuning is best suited for scenarios that require consistent and highly specialized performance. It gives the LLM a high degree of specialization, whereas RAG is limited in how far it can customize behavior. Even RAG combined with intensive prompt engineering may not deliver the desired output as consistently as fine-tuning will. What might this look like in the real world?

For example, a team at a medical institution may want to retrieve relevant information daily regarding patient care, insurance information, and more. RAG would clearly be the better approach here. The most up-to-date research, clinical guidelines, insurance policies, and patient records could be made accessible through an in-system chatbot, giving healthcare professionals visibility like never before. However, if the focus is to generate potential dosing regimens for a clinical trial based on nurse charting updates, fine-tuning an LLM may make more sense. An LLM fine-tuned on a large corpus of dosing regimens and texts from previous trials would allow for a higher degree of specialization and precision, ensuring the generated outputs more closely follow historical protocols and guidelines and giving healthcare professionals a place to start with their treatment.

In summary, RAG is the go-to method when you need your LLM outputs to be flexible and adaptable to the latest information. On the other hand, fine-tuning is the preferred approach when you need your LLM to excel in specific, specialized tasks, offering precision and consistency tailored to your project’s unique needs.

Fine-Tuning LLMs Safely and Efficiently in Dataiku

Organizations can finally fine-tune their LLMs safely and efficiently through Dataiku. Using the new visual fine-tuning recipe, analysts and data scientists alike can quickly create a fine-tuned model from saved Hugging Face models or market-leading hosted models like GPT-4o. This newly created model can be shared among projects and audited by instance admins through the Dataiku LLM Mesh. Technical teams can also use Python code to configure models precisely to their liking.

With the ease of fine-tuning and RAG capabilities in Dataiku, businesses can optimize their GenAI applications to meet diverse, evolving challenges with power and adaptability.
