This article is an excerpt from our e-magazine "AI in 2024: Hot Takes From Dataiku, Deloitte, & Snowflake" and features insights from Ahmad Khan, Head of AI/ML Strategy at Snowflake.
In the age of Generative AI, easy access to cutting-edge models is quickly setting the stage for enterprises to realize that the real differentiator is their proprietary data and the models customized or fine-tuned with that data. At the same time, Generative AI is making advanced AI accessible beyond highly technical data scientists by making natural language, rather than code, the interface between humans and digitized information.
The combination of these two trends means it is critical for enterprise leaders to establish a solid foundation for data and custom models as well as define a strategy that focuses on securely delivering AI in the form of natural-language-oriented application interfaces.
Build and Deploy LLM Apps in Minutes
Like other AI workloads, such as the machine learning models used for predictive analytics, Generative AI demands processing large volumes of data in specialized compute environments. The common approach has been to give AI teams access to the enterprise data and let them make copies of it to use on their own platforms.
Moving data out of its governed source in this way has caused pain points: it creates new silos and has proved less than ideal for security teams, who must analyze vulnerabilities as data gets shuffled across a wide range of compute environments that process data and serve model results.
A modern approach is to enable data scientists and other developers to do their advanced processing where the data is already curated and governed. By reducing data movement, developers can iterate faster during development and reduce the security and operational complexity of taking projects to production.
Built with data-intensive processing in mind, Snowflake offers scalable infrastructure and Large Language Model (LLM) application stack primitives that enable developers to build apps in just minutes without moving data or creating copies. This includes Snowflake Cortex, which brings access to leading LLMs such as Llama 2, as well as Snowpark Container Services, which supports the execution of models packaged as containers.
Use AI in Everyday Analytics Within Seconds
As part of a comprehensive Generative AI strategy, data executives must also identify paths to expand adoption beyond the AI experts and drive innovation among the analysts and business teams. This could be done by enabling multiple teams without engineering backgrounds to use LLMs.
This could include analysts using LLMs as part of familiar SQL functions as well as enabling business teams to use applications with graphical user interfaces that keep data and processing in Snowflake. Snowflake offers first-party applications with pre-built UIs such as Document AI to help non-coders search and get answers from PDF documents.
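As a sketch of what this looks like in practice, the query below calls a Cortex LLM function directly from SQL so the data never leaves Snowflake. The `SNOWFLAKE.CORTEX.COMPLETE` function and the `llama2-70b-chat` model name reflect Cortex as described here; the `support_tickets` table and its columns are hypothetical, invented purely for illustration.

```sql
-- Hypothetical example: summarize support tickets with a Cortex LLM
-- function, without moving data out of Snowflake. The table and column
-- names (support_tickets, ticket_id, ticket_text) are assumptions.
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.COMPLETE(
        'llama2-70b-chat',
        CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
    ) AS summary
FROM support_tickets;
```

Because the call is an ordinary SQL function, an analyst can combine it with the filters, joins, and aggregations they already use, with no separate inference endpoint to stand up or secure.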
Getting the most value from Generative AI will require organizations to define a holistic strategy that first establishes robust data and model governance, then enables developers to accelerate LLM app development and analysts to leverage AI as part of everyday analytics. 2023 was a pivotal year for enterprise AI, and we look forward to partnering with more organizations on their AI strategy in the coming year.
Ahmad is the Head of AI/ML Strategy at Snowflake, where he helps customers optimize their ML workloads on Snowflake. He also works closely with the Snowflake product team to help define the AI feature set within Snowflake based on the voice of the customer. Prior to Snowflake, Ahmad spent over four years at AWS, where he focused on the AWS stack of ML services and was involved in early proofs of concept for AWS SageMaker. Ahmad holds a Master's in Electrical & Computer Engineering from the University of Southern California.