Build a Generative AI Chatbot With Snowflake and Dataiku

By Renata Halim

Leveraging AI, including Generative AI, to enhance user interactions has become essential across sectors. Now, imagine a chatbot that not only comprehends user queries but also delivers precise, contextually accurate responses and continuously learns and improves. This level of sophistication in chatbot technology is now achievable by combining the capabilities of Snowflake and Dataiku.

[Image: Dataiku and Snowflake Joint Architecture]

A recent webinar featuring Pat Masi-Phelps, Partner Sales Engineer at Dataiku, and Carsten Weidmann, Partner Sales Engineer at Snowflake, demonstrated how advanced chatbots powered by Large Language Models (LLMs) are revolutionizing task automation. They introduced Retrieval-Augmented Generation (RAG), a technique that boosts LLM performance by integrating relevant data retrieval directly into the response generation process. The session also covered best practices for evaluating these models, providing valuable insights for those looking to leverage AI more effectively. Watch the session or keep reading for a recap.

→ Watch Now: Build a Generative AI Chatbot With Snowflake and Dataiku

Overcoming the Limitations of Baseline Models With RAG

Chatbots built on baseline LLMs often struggle to generate responses that are both accurate and contextually relevant, producing fabricated answers commonly referred to as "hallucinations." These errors can significantly undermine user trust. RAG addresses this challenge by incorporating data retrieval directly into the response generation process: grounding answers in retrieved documents makes responses more precise and contextually appropriate, which in turn improves the chatbot's reliability and user trust.

Using RAG, the LLM is much more likely to give you a correct answer.

-Pat Masi-Phelps, Partner Sales Engineer at Dataiku

Chatbot Development Process Using RAG

The webinar outlined the process of building chatbots with the combined capabilities of Snowflake and Dataiku, focusing on the RAG technique to improve accuracy. Here are the key steps:

1. Document Gathering and Chunking

The first step is to build a robust knowledge base from a diverse range of documents. Because LLMs have a limited context window (the maximum amount of text they can process at once), we segment these documents into smaller chunks of roughly 1,000 characters each. Chunking keeps every piece small enough to embed and to fit, alongside other retrieved context, within a single prompt.
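The webinar didn't prescribe a particular chunking implementation, but the idea is simple enough to sketch in Python. Here is a minimal version, assuming plain-text source files (the file names are hypothetical); a small overlap between chunks keeps sentences cut at a boundary intact in at least one chunk:

```python
# Minimal chunking sketch: split each document into ~1,000-character
# pieces with a small overlap between consecutive chunks.

def chunk_document(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks of roughly chunk_size characters."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back slightly so chunks overlap
    return chunks

# Hypothetical source documents for the knowledge base.
doc_names = ["dataiku_overview.txt", "snowflake_overview.txt"]
knowledge_base = []
for name in doc_names:
    with open(name) as f:
        for i, chunk in enumerate(chunk_document(f.read())):
            knowledge_base.append({"source": name, "chunk_id": i, "text": chunk})
```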

2. Embedding and Smart Retrieval

Next, we transform these document chunks into vector representations using an embedding model. These vectors capture the deeper semantic meaning of each chunk. When a user poses a question, we embed the question the same way, and our smart retrieval system compares its vector against the knowledge base to find the most similar chunks, ensuring the chatbot retrieves the data that most accurately addresses the user's query.
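Continuing the sketch above, here is what embedding and retrieval might look like, using the open-source sentence-transformers library as a stand-in for whichever embedding model your stack provides:

```python
# Embed every chunk once, then retrieve by cosine similarity at query time.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

chunk_vectors = embedder.encode(
    [c["text"] for c in knowledge_base],
    normalize_embeddings=True,  # unit vectors, so dot product == cosine similarity
)

def retrieve(query: str, k: int = 4) -> list[dict]:
    """Return the k chunks most semantically similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vectors @ q
    top = np.argsort(scores)[::-1][:k]
    return [knowledge_base[i] for i in top]
```

In production these vectors would typically live in a managed vector store rather than in memory, but the similarity search is the same idea.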

3. Prompt Engineering and LLM Processing

Finally, we refine the chatbot's generations by customizing our instructions to the LLM, a process known as prompt engineering. An example prompt used to build this chatbot is: "Please act as a Sales Engineer for Dataiku and Snowflake. Users will ask you questions about these platforms and their capabilities. Please respond with clear, helpful, and concise answers to user questions." Prompt engineering helps the LLM respond in the way we want it to.
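Putting the pieces together, a minimal sketch of the final prompt assembly (the exact template Dataiku uses will differ) looks like this:

```python
# Assemble the final prompt: system instructions plus retrieved context.
SYSTEM_PROMPT = (
    "Please act as a Sales Engineer for Dataiku and Snowflake. "
    "Users will ask you questions about these platforms and their capabilities. "
    "Please respond with clear, helpful, and concise answers to user questions."
)

def build_prompt(question: str) -> str:
    """Combine the system instructions, retrieved chunks, and user question."""
    context = "\n\n".join(c["text"] for c in retrieve(question))
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Answer using only the following context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```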

Evaluating and Fine-Tuning the Model

To ensure the effectiveness of our LLM, we conduct a thorough evaluation of the solution based on four critical criteria:

  • The generated answers must directly address the meaning of the questions asked.
  • The documents selected through smart retrieval (context) should be helpful in answering the question.
  • The generated answers should be well-supported by the documents selected during the RAG process.
  • The generated answers should be correct!

While we may not score a perfect 1.0 on all four metrics, we can maximize them by testing different LLMs and iteratively improving the RAG process.
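These four criteria map closely onto metrics implemented by open-source RAG evaluation libraries. As one hedged illustration, here is how they might be computed with the ragas library (the API shown is version-dependent, and ragas needs a judge LLM configured, for example via an OpenAI API key):

```python
# Score a RAG pipeline on the four criteria above using ragas
# (API current as of ragas 0.1.x; column names may vary by version).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,    # 1. does the answer address the question?
    context_precision,   # 2. were the retrieved documents helpful?
    faithfulness,        # 3. is the answer supported by those documents?
    answer_correctness,  # 4. is the answer actually correct?
)

eval_data = Dataset.from_dict({
    "question": ["Which Snowflake LLMs can I call from Dataiku?"],
    "answer": ["Snowflake-hosted models such as Arctic can be called ..."],
    "contexts": [["<retrieved chunk 1>", "<retrieved chunk 2>"]],
    "ground_truth": ["Dataiku's integration exposes Snowflake-hosted LLMs ..."],
})

scores = evaluate(eval_data, metrics=[
    answer_relevancy, context_precision, faithfulness, answer_correctness,
])
print(scores)  # each metric falls between 0 and 1
```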

[Image: What to Consider When Evaluating an LLM]

Revisiting Document Selection

Continuously reassessing the selected documents' relevance and comprehensiveness is crucial to support the chatbot’s answers effectively. This ongoing process may involve incorporating additional or more detailed documents to improve our smart retrieval capabilities. Such refinement ensures the chatbot maintains high accuracy, precision, and reliability in its responses.

If you're utilizing LLMs in your enterprise environment, make sure the LLMOps part will then pop up at one point…That human feedback loop is very, very important to make sure that you can actually augment the quality of the LLM during its lifecycles.

-Carsten Weidmann, Partner Sales Engineer at Snowflake

Leveraging Industry Metrics and Best Practices

Adopting a structured approach to evaluation is essential, especially for enterprise applications. To ensure the quality of LLM outputs, we highly recommend assembling a list of sample questions with reference answers. By running these samples through the pipeline, we can effectively gauge the performance of different LLMs, focusing on two critical metrics: faithfulness and answer relevancy.

This whole LLM RAG pipeline is a system and it's not just the question and the generated answer. It's also the context. It's also the ground truth. And so there are all these metrics that are being created out there in the industry to help measure the relationship between each of these different things. So we really recommend that you guys use all of these metrics. 

-Pat Masi-Phelps, Partner Sales Engineer at Dataiku

Don't just stick with one [baseline model], but actually try at least a second one and probably a third one…to make sure you choose the best one.

-Carsten Weidmann, Partner Sales Engineer at Snowflake
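A simple way to act on that advice is to run the same reference question set through each candidate model and compare the scores side by side. A sketch, where ask_llm and score_against_references are hypothetical wrappers around your LLM client and the evaluation run shown earlier:

```python
# Compare several candidate LLMs on the same reference QA set.
REFERENCE_QA = [
    {"question": "What is Dataiku Answers?",
     "reference": "A prepackaged, scalable web app for deploying RAG chatbots."},
    # ... more sample questions with reference answers
]

CANDIDATE_MODELS = ["snowflake-arctic", "llama3-70b", "another-baseline"]  # illustrative names

results = {}
for model in CANDIDATE_MODELS:
    answers = [ask_llm(model, build_prompt(qa["question"])) for qa in REFERENCE_QA]
    results[model] = score_against_references(answers, REFERENCE_QA)  # hypothetical helper
```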

Deployment With Dataiku Answers

In the final stage, we deploy our RAG-enhanced chatbot using Dataiku Answers, a prepackaged, scalable web app engineered for enterprise applications. Designed for seamless integration with existing enterprise systems, Dataiku Answers simplifies the deployment of advanced chatbots. Key configuration steps include:

  • Logging User Interactions: All user queries and LLM responses are logged to maintain a comprehensive history of interactions, which is crucial for security and analysis in an enterprise environment.
  • Feedback Mechanisms: Users can evaluate the chatbot’s responses positively or negatively. This continuous feedback is essential for ongoing improvement and is systematically logged to refine the chatbot’s performance.
  • Selecting the LLM and Knowledge Bank: Choose your preferred LLM. In the webinar, Snowflake's Arctic and Llama3 were highlighted for their outstanding performance. Additionally, select the appropriate knowledge bank that best aligns with your chatbot’s operational requirements.

Conclusion

With Dataiku, enterprises can quickly create high-performing LLM-powered applications like RAG chatbots. With LLMs hosted on Snowflake and key integrations between Dataiku and Snowflake LLMs, organizations can rest easy knowing their data, prompts, and LLM generations will stay within the confines of their Snowflake environment. As with all data science solutions, teams should pay attention to the performance of a RAG LLM chatbot with metrics like accuracy, faithfulness, and relevancy. 
