What Is MLOps & Why It Matters More Than Ever

By Lynn Heidmann

When building an Enterprise AI strategy that is fit to carry the business through economic highs and lows, it’s critical to have systems for monitoring models in production and to be able to quickly introduce, test, train, and implement new models in order to shift strategies or adapt to changing environments on a dime. Enter: MLOps.

What Is MLOps?

Model-based machine learning and AI are rapidly becoming mainstream technologies in large enterprises. To reap the full benefit, models need to be put into production, but doing that at scale presents new challenges. Existing DevOps and DataOps expertise is not enough, because the fundamental challenges of managing machine learning models in production are different.

That’s where MLOps, the standardization and streamlining of machine learning lifecycle management, comes in. However, it’s not just a simple application of DevOps practices to machine learning; in fact, there are three key reasons why managing machine learning lifecycles at scale is challenging:

  1. There are many dependencies: Not only is data constantly changing, but business needs shift as well.
  2. Not everyone speaks the same language: Even though the machine learning lifecycle involves people from the business, data science, and IT teams, these groups don’t use the same tools or, in many cases, even share the same fundamental skills.
  3. (Most) data scientists are not software engineers: Most specialize in model building and assessment, and they are not necessarily experts in writing applications.

Building MLOps Capabilities

A robust machine learning model management program would aim to answer questions such as:

  • Who is responsible for the performance and maintenance of production machine learning models?
  • How are machine learning models updated and/or refreshed to account for model drift (deterioration in the model’s performance)?
  • What performance metrics are measured when developing and selecting models, and what level of performance is acceptable to the business?
  • How are models monitored over time to detect model deterioration or unexpected, anomalous data and predictions? (See the monitoring sketch after this list.)
  • How are models audited, and are they explainable to those outside of the team developing them?
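
As one concrete illustration of the monitoring question above, the sketch below flags drift in a single input feature by comparing recent production values against a training-time baseline with a two-sample Kolmogorov-Smirnov test. This is a minimal, hedged example rather than a full monitoring system: the arrays, the p-value threshold, and the feature itself are hypothetical stand-ins for whatever a real pipeline would pull from prediction logs or a feature store.

```python
# Minimal drift-check sketch (assumptions: one numeric feature held in
# memory as NumPy arrays, and a KS-test p-value threshold of 0.05).
import numpy as np
from scipy.stats import ks_2samp


def feature_drift_detected(baseline: np.ndarray,
                           recent: np.ndarray,
                           p_threshold: float = 0.05) -> bool:
    """Flag drift when recent production values no longer look like the
    training baseline, per a two-sample Kolmogorov-Smirnov test."""
    _statistic, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    # Hypothetical data: training-time values vs. shifted production values.
    training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
    production_values = rng.normal(loc=0.4, scale=1.2, size=1_000)

    if feature_drift_detected(training_values, production_values):
        print("Drift detected: consider refreshing or retraining the model.")
    else:
        print("No significant drift detected for this feature.")
```

In practice, a check like this would run on a schedule across many features and prediction outputs, sit alongside performance metrics once ground truth becomes available, and feed dashboards, alerts, or retraining triggers rather than a one-off print statement.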

These questions span the range of the machine learning model lifecycle, and their answers don’t just involve data scientists, but people across the enterprise. Answering them is not an optional exercise; it’s essential not only to scaling data science and machine learning efficiently at the enterprise level, but also to doing it in a way that doesn’t put the business at risk.

Teams that attempt to deploy data science without proper MLOps practices in place will face issues with model quality and continuity, especially in today’s unpredictable and constantly shifting environment. Worse than poor quality, teams without MLOps practices risk introducing models that have a real, negative impact on the business (e.g., a model that makes biased predictions that reflect poorly on the company).
