Accelerating the Path to Enterprise AI with MLOps

Scaling AI | Lynn Heidmann

Once an organization can quickly operationalize data projects and moves from a handful to hundreds (or thousands) of machine learning models in production, questions of maintenance and management arise. Enter: MLOps.

If operationalization was the topic of 2018 and 2019, then model maintenance will be the hot topic of 2020. As AI initiatives expand, there needs to be clear ownership and a clear process for how models are maintained, ensuring they perform as expected and do not have adverse effects on the business.

Though the two disciplines have significant differences (namely that MLOps must address the entire model lifecycle, whereas DevOps focuses on application delivery), MLOps nonetheless has much to learn from the world of DevOps.


"Technology innovation leaders are keen to apply DevOps principles for AI and ML projects, but they often struggle with architecting a solution for automating end-to-end ML pipelines across data preparation, model building, deployment and production due to lack of process and tooling know-how."

Gartner, "Accelerate Your Machine Learning and Artificial Intelligence Journey Using These DevOps Best Practices," Arun Chandrasekaran and Farhan Choudhary, 12 November 2019. Available upon request from Dataiku.

For organizations that are beginning to accelerate their data efforts, considering an MLOps team or role is critical (or if it's not critical now, it will become so soon). Even if the role is not formalized yet, there should be someone who is at least partially responsible for:

  1. Large-scale operationalization, not just one-off operationalization. With more and more models in production, it becomes important to think holistically about how the different pieces work together as a whole.

  2. Monitoring models in production, including tracking and visualizing drift over time; a minimal drift check is sketched after this list. Ideally, this happens in one central location (see: why enterprises need AI platforms). Monitoring also likely involves communication with the business side; even if a model is technically performing well, it's critical to understand how it is impacting business operations, whether positively or negatively.

  3. Inherently included in monitoring, but worth calling out as another hot topic of 2019: looking for possible issues like bias, especially (though certainly not exclusively) when the outcome could be bad PR for the company. A simple bias check is also sketched below.
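
To make the monitoring item concrete, here is a minimal sketch of a scheduled drift check. It assumes the training baseline and recent production data are available as pandas DataFrames and compares each feature's distribution with a two-sample Kolmogorov-Smirnov test; the feature names, threshold, and toy data are illustrative assumptions, not a prescribed workflow.

```python
# Minimal drift-check sketch (illustrative threshold and feature names).
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.05  # assumption: flag a feature when the KS test p-value falls below this


def detect_drift(baseline: pd.DataFrame, production: pd.DataFrame, features: list) -> pd.DataFrame:
    """Compare each feature's production distribution against the training baseline."""
    rows = []
    for feature in features:
        # Large KS statistic / small p-value => the two distributions differ
        stat, p_value = ks_2samp(baseline[feature].dropna(), production[feature].dropna())
        rows.append({
            "feature": feature,
            "ks_statistic": round(stat, 4),
            "p_value": p_value,
            "drift_flagged": p_value < DRIFT_P_VALUE,
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    # Toy data standing in for the training baseline and a week of production scores
    rng = np.random.default_rng(0)
    baseline = pd.DataFrame({"age": rng.normal(40, 10, 5000), "income": rng.lognormal(10, 0.5, 5000)})
    live = pd.DataFrame({"age": rng.normal(45, 10, 5000), "income": rng.lognormal(10, 0.5, 5000)})
    print(detect_drift(baseline, live, ["age", "income"]))
```

A check like this can run on a schedule against each batch of scored data, with flagged features routed to whoever owns the model.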
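In the same spirit, a routine bias check can start by comparing positive-outcome rates across groups. The column names, the groups, and the four-fifths (0.8) threshold below are illustrative assumptions, not a complete fairness audit.

```python
# Minimal bias-check sketch: disparate impact ratio per group (illustrative columns).
import pandas as pd


def disparate_impact(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Ratio of each group's positive-outcome rate to the best-served group's rate."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


if __name__ == "__main__":
    # Toy decisions: group A approved 60% of the time, group B 45%
    scored = pd.DataFrame({
        "group": ["A"] * 100 + ["B"] * 100,
        "approved": [1] * 60 + [0] * 40 + [1] * 45 + [0] * 55,
    })
    ratios = disparate_impact(scored, "group", "approved")
    print(ratios)
    # Flag groups falling below the commonly cited four-fifths (0.8) rule of thumb
    print(ratios[ratios < 0.8])
```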


Go Further:
Read the Gartner Report

Get a copy of the report "Accelerate Your Machine Learning and Artificial Intelligence Journey Using These DevOps Best Practices," compliments of Dataiku. 


 


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.
