Understanding the Why and How of the LLM Mesh Architecture

Scaling AI | Marie Merveilleux du Vignaux

In the latest GenAI Bootcamp session, Chad Covin, technical product marketing specialist at Dataiku, and Nate Kim, senior field engineer at Dataiku, discussed the challenges IT leaders commonly face when implementing large language models (LLMs) at scale. They then introduced the solution: the Dataiku LLM Mesh architecture. Let's delve into the essence of this webinar.

→ WATCH THE FULL GENAI BOOTCAMP SESSION 

What Are LLMs?

As stated by Nate, an LLM is an AI model trained to understand and generate human-like text based on extensive training data. LLMs are exceptionally good at generating coherent text, making them a firm favorite for organizations that deal with sizable amounts of textual data.

[Chart: the gap between AI innovation and enterprise reality]

In the process of incorporating LLMs into organizational practices, Nate recommends six key aspects to consider:

  1. Computation and infrastructure
  2. Privacy
  3. Security and compliance
  4. Choice and dependencies
  5. Ethics and trust
  6. Model accuracy

These pillars are essential to successful LLM adoption and pose their own unique challenges.

The Dataiku LLM Mesh

The Dataiku LLM Mesh is a centralized gateway that provides secure access to the leading GenAI tools, services, and applications. Nate stressed how the LLM Mesh helps IT professionals build the infrastructure for GenAI and LLM workloads. Its benefits include support for multiple LLM providers, a secure gateway, protection of sensitive data, cost control, and query and response enrichment.

[Diagram: the Dataiku LLM Mesh]

Described as a gateway for GenAI applications, the LLM Mesh exposes a single, consistent API for all interactions. It can reach both third-party LLM services and privately hosted LLM models, and it supports horizontal scaling and flexible cluster configuration.
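
To make this concrete, here is a minimal sketch based on Dataiku's documented Python client, run from inside a DSS instance. The LLM ID string is illustrative; the IDs actually available depend on the connections your administrator has configured, and `project.list_llms()` will enumerate them:

```python
import dataiku

# Inside DSS, the client is preconfigured; no credentials are needed here.
client = dataiku.api_client()
project = client.get_default_project()

# An LLM ID names a connection and a model behind the Mesh.
# This value is illustrative -- use project.list_llms() to see yours.
LLM_ID = "openai:my-openai-connection:gpt-4o-mini"

llm = project.get_llm(LLM_ID)
completion = llm.new_completion()
completion.with_message("Summarize our Q3 support tickets in three bullets.")
resp = completion.execute()

if resp.success:
    print(resp.text)
```

Because the Mesh sits between the caller and the provider, pointing this same code at a different provider is simply a matter of swapping the LLM ID.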

Nate explained that the LLM Mesh accounts for the compute requirements of locally hosted models, such as the number of parameters and the numerical precision of the weights. Dataiku's auto-scaling setup on Elastic AI compute can automatically scale out to additional GPU nodes and employ tensor parallelism for inference.
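
A rough back-of-the-envelope calculation shows why parameter count and precision dominate sizing. The sketch below estimates the memory needed to hold the weights alone, deliberately ignoring activations and the KV cache, which add real overhead on top:

```python
def weight_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Rough GPU memory needed just to hold the model weights."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# A 7B-parameter model at different precisions:
print(weight_memory_gb(7, 2))    # fp16/bf16: ~13.0 GB
print(weight_memory_gb(7, 1))    # int8 quantized: ~6.5 GB
print(weight_memory_gb(7, 0.5))  # int4 quantized: ~3.3 GB
```

Once the weights no longer fit on a single GPU, tensor parallelism splits each layer across several GPUs, which is what the auto-scaling setup Nate described makes possible for inference.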

Part of the benefit of the LLM Mesh API is its applicability outside the platform: calling it from other applications or code outside of Dataiku is entirely feasible, providing uniform usage everywhere.
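
For instance, an external application could reach the same gateway through Dataiku's public `dataikuapi` package. In this sketch, the host URL, API key, project key, and LLM ID are all placeholders:

```python
import dataikuapi

# Connect from outside Dataiku with a host URL and a personal API key.
client = dataikuapi.DSSClient("https://dss.example.com", "YOUR_API_KEY")
project = client.get_project("MY_PROJECT")

# The same LLM Mesh abstraction as inside DSS: one ID, one API.
llm = project.get_llm("openai:my-openai-connection:gpt-4o-mini")
completion = llm.new_completion()
completion.with_message("Classify this ticket: 'My dashboard will not load.'")
resp = completion.execute()

if resp.success:
    print(resp.text)
```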

Levels of LLM Usage

Nate then walked through multiple levels of LLM usage in Dataiku, from integrating prebuilt components, to optimizing the prompts passed to the LLM, to enriching queries with additional context, to fine-tuning LLMs. Dataiku provides native recipes to handle all these situations.
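
To give a flavor of what enriching queries can look like, here is a generic illustration (not Dataiku's native recipe); `retrieve_context` is a hypothetical stand-in for a real vector-store lookup, and the retrieved snippets are stitched into the prompt before it reaches the LLM:

```python
def retrieve_context(question: str) -> list[str]:
    # Hypothetical stand-in for a vector-store or knowledge-bank lookup.
    return [
        "Refunds are processed within 5 business days.",
        "Refund requests require an order number.",
    ]

def build_prompt(question: str) -> str:
    # Stitch the retrieved snippets into the prompt as grounding context.
    context = "\n".join(f"- {c}" for c in retrieve_context(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How long do refunds take?"))
```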

Nate then elaborated on applications of LLMs in different scenarios, including batch jobs, model evaluation, retraining, and serving web applications. A prime benefit is being able to access the models via Dataiku's Python API and to segregate users and applications within the organization rather than relying on public internet channels. Nate also noted that caching, covering both model caching and query caching, reduces network costs.
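
As a simple illustration of the query-caching idea (not Dataiku's internal implementation, which is configured on the Mesh itself), identical prompts can be served from a cache instead of triggering a new model call:

```python
import hashlib

_cache: dict[str, str] = {}

def cached_complete(prompt: str, call_llm) -> str:
    """Serve repeated prompts from the cache; call the LLM only on a miss."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # pay network and token cost once
    return _cache[key]

# Demo with a stand-in LLM call:
fake_llm = lambda p: f"response to: {p}"
print(cached_complete("What is the LLM Mesh?", fake_llm))
print(cached_complete("What is the LLM Mesh?", fake_llm))  # cache hit
```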

Flexibility and Trust for Greater Connectivity

The session ended by highlighting Dataiku's integrations with various LLM service providers, such as Bedrock and OpenAI, and teasing forthcoming Dataiku connectivity with AWS that allows result caching for FAQs.

This webinar provided comprehensive insight into deploying and incorporating LLMs with Dataiku. Chad and Nate extensively covered the benefits of the Dataiku LLM Mesh for practical, efficient, and secure use of LLMs. The breadth and depth of the subject, as presented by Dataiku's experts, underscore the continuous growth and complexity of traditional AI and GenAI, and of the tools supporting these frontiers.
