Debating the Future of AI With TitanML and Dataiku

Scaling AI | Marie Merveilleux du Vignaux

The dynamic nature of AI has only amplified with the rise of GenAI, making its future all the more intriguing. During our 2024 Everyday AI Summits in Chicago, San Francisco, and New York, Ren Lee, SVP of Marketing at Dataiku, hosted a panel to analyze the potential future directions of AI in the enterprise. Attendees heard expert insights from Meryem Arik, CEO and Co-Founder at TitanML, Jed Dougherty, VP, Platform Strategy at Dataiku, and Florian Douetteau, CEO and Co-Founder at Dataiku.

This blog features some of the most interesting pearls of wisdom shared during these panels, to help organizations make the most of the promising future of AI. If you're wondering whether or not to self-host, which GenAI use case to tackle first, or which tools are best for specific use cases, you've come to the right place. Read on for answers to these questions and more!

→ Watch All Three Sessions Here

Q1: What Use Cases Will Drive Value and Return on AI?

Meryem Arik, CEO and Co-Founder @ TitanML

Since the rise of GenAI means you no longer need to train models from scratch, you can fail really quickly. What we've seen in some organizations is that there can often be a bit of paralysis around the first use case, because you want one that will immediately drive ROI. However, I think there's a lot of value in just trying to build something — it doesn't need to be the perfect project. You will learn a lot along the way, and you'll start thinking about use cases that you might not have previously thought about. Given that we have so much to build over the next five to 10 years, and it's very, very cheap and easy to start building, I think that just building something is a good place to start.

When I think about good GenAI use cases, I typically turn to internal use cases — and not chatbot ones. People seem to see the chatbot as the LLM 101, but chatbots are actually incredibly difficult to do well. Instead, I would start by identifying parts of processes that you can augment using AI and LLMs that are internal and fairly low risk. Build confidence with these smaller steps and then go on to the bigger, meatier ones next.

Jed Dougherty, VP, Platform Strategy @ Dataiku

I've been telling a lot of our clients for the last year that it's almost irresponsible not to try replacing existing NLP projects with LLMs. Organizations already have dozens of NLP tasks or workflows in their processes (paragraph summarization, entity analysis, etc.), so they should replace those with LLMs, or at least try to. It'll probably work better than whatever they currently have, and it will only take a day. You don't have to think about building a chatbot to roll out to your whole organization. You should instead look at what is available right now and start working on that.
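To make this concrete, here is a minimal sketch of retargeting two classic NLP tasks at a general-purpose LLM. Everything here is illustrative: the task names, the prompt templates, and the `call_llm` placeholder are assumptions, and `call_llm` would be swapped for whatever completion call your provider actually exposes.

```python
# Sketch: replacing legacy NLP workflows (summarization, entity extraction)
# with prompts against a single general-purpose LLM.
# The prompts and call_llm() are illustrative placeholders, not a real API.

PROMPTS = {
    "summarize": "Summarize the following text in one sentence:\n\n{text}",
    "entities": "List the named entities in the following text:\n\n{text}",
}

def build_prompt(task: str, text: str) -> str:
    """Turn a legacy NLP task into a prompt for a general-purpose LLM."""
    if task not in PROMPTS:
        raise ValueError(f"Unsupported task: {task!r}")
    return PROMPTS[task].format(text=text)

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your provider's completion call here."""
    return f"<model output for a {len(prompt)}-character prompt>"

if __name__ == "__main__":
    prompt = build_prompt("summarize", "Dataiku hosted panels in three cities.")
    print(call_llm(prompt))
```

The point of the sketch is the shape of the migration: the task-specific models disappear, and what remains is prompt construction plus one call to a shared endpoint.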

Q2: As AI Learns to Carry Out Cognitive Tasks, What Will the Next Phase of Work Look Like?

Meryem Arik, CEO and Co-Founder @ TitanML

I think data people and engineering people will be very safe. The thing I'm more concerned about is whether we, as data organizations, are educating our business individuals on AI. If I am someone in operations, who is going to teach me the value that AI can have for my productivity and the way that I work? We should be empowering everyone in our organization to adopt AI in ways that are sensible and responsible and will help them do their work. We are fortunate enough to be technically literate and already understand some of these implications, but not everyone in your organization will be. We need to figure out how to train them so they can get the value of practical AI.

Jed Dougherty, VP, Platform Strategy @ Dataiku

Clément Stenac, CTO and Co-Founder at Dataiku, likes to say that the use of AI is inversely proportional to the originality of the problem, which means that AI is very good at solving unoriginal problems or doing unoriginal tasks, and it's pretty darn bad at doing brand new, original things. What that means is that if your whole job consists of doing the same unoriginal task all the time, you should be a little worried. But if you're coming up with new stuff and doing original things, you still have a ways to go.

Q3: What Should Organizations Consider When Choosing to Self-Host?

Meryem Arik, CEO and Co-Founder @ TitanML

I think APIs are really fantastic, and if your use case can get away with using them, that's, on the whole, a really good thing.

Broadly, there are three reasons why our clients in particular would choose to self-host.

  1. Security and privacy: If you are not already an Azure client and you want to deploy something in your own VPC, or even in an on-prem data center, then you don't really have another option but to self-host. If you really need that total privacy and control, you have to self-host.
  2. Performance: The best of the best model, in general terms, is something like GPT-4, but when your needs are domain specific, you can get better performance by self-hosting smaller, specialized models.
  3. Scalability and cost: When you're starting at a very small scale, using API-based models is much cheaper because you pay by the token. When you start deploying at mass enterprise scale, that per-token cost starts adding up, and deploying a much smaller model on your own infrastructure can be much cheaper.


Q4: Are We Moving Towards an Ecosystem of Models Dominated by a Few Large Foundational Models or a Diverse Ecosystem? 

Florian Douetteau, CEO and Co-Founder @ Dataiku

There is so much benefit today in the ability to build your own specialized, smaller model. For example, when you build the second version of your app, you can look for more accuracy with open models and build on top of that, even in the long term, not just the short term.

For many of the use cases that are of interest to us, including text analytics and finding the right information in documents, models will get better in the coming years, and we will potentially see some form of performance plateau. Because of that, you would use a variety of models in the enterprise, specialized or not, rather than just one or two providers calling the shots.

Q5: What Workflows or Tooling Are Essential for Differentiated AI?  

Jed Dougherty, VP, Platform Strategy @ Dataiku

There are different ways organizations can deploy their LLMs. They can deploy them on-prem, with a cloud provider, or they can consume them from an ISV, like Anthropic or OpenAI.

The choice between those three deployment options or access options for LLMs is going to be based on a combination of the organization's risk profile, how much money they’re willing to spend, and what kind of application they’re trying to build. 

For example, if you're trying to build a multimodal application: until very recently, the big API services didn't do image-to-text. However, some local Hugging Face models, like Llama, for example, would do it. That's a very specific use case consideration in how you're picking out your LLM.

The reality is that no sufficiently large company is going to make only one decision. No big company that I work with right now is only running local models, or even only running models from a single provider. A very common setup would be to have Bedrock for one thing, Azure OpenAI for something else, and then Hugging Face models deployed locally. Accepting that a mix of providers and locally hosted models is going to be a reality of your toolbox is critical for whatever comes next and whatever else you're building on top of those things.
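That multi-provider toolbox often ends up as a simple routing layer: a table mapping each task to the model and provider chosen for it. Here is a minimal sketch; the provider names mirror the examples in the panel, but the task names and the `route()` helper are hypothetical illustrations, not any real product's API:

```python
# Sketch of a task-to-provider routing table for a mixed LLM estate.
# Task names and route() are hypothetical; providers echo the panel's examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRoute:
    provider: str   # where the model runs
    model: str      # which model to call
    hosted: bool    # True for a managed API, False for a local deployment

ROUTES = {
    "summarization": ModelRoute("azure-openai", "gpt-4", hosted=True),
    "entity-extraction": ModelRoute("bedrock", "claude", hosted=True),
    "internal-search": ModelRoute("local-hugging-face", "llama", hosted=False),
}

def route(task: str) -> ModelRoute:
    """Look up the configured model for a task, failing loudly on unknown tasks."""
    try:
        return ROUTES[task]
    except KeyError:
        raise ValueError(f"No model configured for task: {task!r}")

if __name__ == "__main__":
    r = route("internal-search")
    print(f"{r.model} via {r.provider} (hosted={r.hosted})")
```

Centralizing the choice in one table is what makes it cheap to swap a provider for one task without touching the applications built on top.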

Q6: What Is Top of Mind Today When Thinking of Your Mission to Bring Quality AI to Everyone? 

Florian Douetteau, CEO and Co-Founder @ Dataiku

We started Dataiku with a mission of democratizing AI but, from my perspective, at the end of the day, when I say democratize AI, it's not just about everyone using it. It's also about who is building it. I believe there is a seismic change coming in the way AI will transform some business processes and the way everyone works. But in order for that to happen, you actually need domain experts to be part of the change.

Domain experts need to participate — in one way or another — in the building of models, in the specification of applications, in everything they can do to bring the right data in and fully embrace such change. If it's only software engineers trying to encode everyone's work, it will not work. Or at least not the way we expect it to. So, in the long term, we will continue building our product in a way that enables as many people as possible, especially beyond data scientists, to get into data, because that's the future of work for many domain experts today. I think this was already true back in the early days of data science, and I think it's still true today in the world of GenAI.

Q7: In the Last Year, What Has Changed and What Has Stayed the Same With GenAI? 

Florian Douetteau, CEO and Co-Founder @ Dataiku

What has changed is that all of us have moved beyond the stage of discovering a new, potentially fun consumer application, like talking to a bot. I think we have also moved from a world where there was one provider and one model to one where there will be a continuous release of new models and emerging technologies. If you fast forward two years, the technology we will have at our disposal will be very different — more like a zoo of models. This will mean lots of use cases and lots of technologies. This can lead to complexity and stress, but I think it's also an opportunity to accelerate growth.

Meryem Arik, CEO and Co-Founder @ TitanML

Three years ago, we were working on NLP. Back then, a 100-million-parameter model was considered really big. Now, my clients think that seven billion is very, very small. It's a complete paradigm shift in only about three years.

One of the biggest changes in the last year is the pivot from the chatbot paradigm towards something more search-based or properly embedded in a workflow. When everyone was looking at their first enterprise use cases, they were looking at ChatGPT and wondering how they could create chatbots. Now, organizations are starting to realize that chatbots are actually a pretty poor first use case and that a lot of the value being derived from GenAI does not come from these super sexy, shiny chatbots, but rather from things that are far more subtle, more mundane, and more ingrained in processes.
