How Does Dataiku Work With the Cloud Providers?

Dataiku Product, Scaling AI | Lynn Heidmann

Dataiku helps organizations quickly realize the value of cloud providers — e.g., Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure — and adopt data science practices for multi-cloud AI at scale. But what does this mean in practice?


Watch Shaun McGirr, EMEA RVP of AI Strategy, give an overview of how Dataiku works with the big public clouds, plus a real-life example of how one IT team massively scaled out its service failure detection system using the elastic scale of the cloud together with Dataiku’s orchestration and collaboration capabilities.

Dataiku Is for Everyone

Since 2013, Dataiku has specialized in one specific question that every large enterprise asks when trying to scale its AI efforts: How do you bring experts and non-experts together in one place to collaborate on this kind of work?

Dataiku is both end to end (from finding data to data wrangling, applying machine learning, pushing to production, DataOps, MLOps, governance, and everything in between) and cloud agnostic. This makes it easy for users of all skill levels to go quickly from data exploration and preparation to fully built-out AI applications, all in one place, without siloing that work among technical experts alone.

Dataiku for Data Scientists: A Real-Life Example

Dataiku’s end-to-end data science platform, deployed wherever and whenever it’s needed, enables organizations of any size to deliver enterprise AI in a highly scalable, powerful, and collaborative environment. But just because it’s for everyone doesn’t mean the experience is suboptimal for data scientists and other technical professionals.

For example, take the team at a large telecommunications company managing a massive and complex IT stack. A team of IT data science experts built deep learning models leveraging their cloud providers’ tools, and they had fantastic results — the models were able to spot emerging service disruptions in ways that humans could not.

The problem in their case was scale. They estimated that, given the time it took to craft each deep learning model and get it up and running, instrumenting and measuring all 40,000 services they ideally wanted to cover would take 40 years. Instead, they leveraged Dataiku to scale much faster, using it as an orchestration layer to handle (and automate) much of the work they didn’t need to do themselves.
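For readers curious what such an orchestration layer can look like in code, here is a minimal sketch using Dataiku’s public Python client (dataikuapi): a single parameterized training scenario is run once per service instead of hand-building a model for each. The host, API key, project key, scenario ID, variable name, and service list are hypothetical placeholders, and the sketch illustrates the general pattern rather than this team’s actual implementation.

# A minimal sketch, assuming Dataiku's public Python client (dataikuapi).
# The host, API key, project key, scenario ID, and service list below are
# hypothetical placeholders, not the telecom team's real setup.
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "YOUR_API_KEY")
project = client.get_project("SERVICE_MONITORING")  # hypothetical project key

# One parameterized training scenario instead of a hand-crafted model
# per service.
scenario = project.get_scenario("TRAIN_FAILURE_MODEL")  # hypothetical scenario ID

services = ["service-0001", "service-0002", "service-0003"]  # stand-in for the 40,000

for service_id in services:
    # Point the scenario at this service's telemetry via a project variable,
    # then run it to completion (run_and_wait raises if the run fails).
    variables = project.get_variables()
    variables["standard"]["service_id"] = service_id
    project.set_variables(variables)
    scenario.run_and_wait()
    print(f"{service_id}: training scenario completed")

In practice the loop itself could also live inside a Dataiku scenario or be partitioned across runs, but the point is the same: the repetitive per-service work is driven programmatically rather than rebuilt by hand.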
