Why Operationalization Is Hard (But It Doesn’t Have to Be)

Scaling AI | Sofiane Fessi

Flashback to 2004, the start of Project Purple, which would later give birth to a piece of engineering that has shaped the world as we know it today: the iPhone. In 2006, after $150 million invested in R&D, design, and product development, and with over 1,000 employees (and not the least important among them) working day and night, the prototype of the iPhone was finally ready for mass production.


The stakes were vertiginous: two years of work, excitement running high at every level of the company, from designers to engineers to board members. Massive amounts of effort and investment and, above all, a potentially fate-changing milestone for Apple.

Now imagine this response to Steve Jobs from Mr. Buzz Kill, Apple's Head of Manufacturing (whom I just made up).

Dear Steve,

It is with great attention that I have looked into the prototype and details of Project Purple. I can honestly say it is the best-looking invention any company like ours could dream of. However, I hate being the bearer of bad news, but there is not a chance we can put this into production. The material used in the prototype development phase can never hold up in a context of mass production. Besides, the language used by your programmers is completely different from what we use here on the software side. It would take us years to recode everything, and costs would go through the roof.

As the adaptation of your prototype for the mass market would require too much of our resources and focus, we would have to stop all other ongoing efforts around Apple products already in production.

I deeply regret not being able to take this on, but hopefully you can manage to build that other product that will make Apple the biggest company in the world.

See you at lunch!

Buzz

Completely absurd, right? Firstly, because it didn't happen that way (far from it). And it would be completely unimaginable for a publicly listed tech giant to fail so miserably, leveraging such talent and financial resources only to end up with zero associated revenue.

Operationalization: Buzzkill in the Age of AI

And yet, in 2019, this is precisely what happens at many organizations when it comes to AI. For a number of reasons, both technical and people-related, it has historically been very difficult to industrialize AI projects and get actual value from them.

If not set up for success, the process of operationalization can be a buzz kill for data teams.

The reason? Deployment of data science and machine learning projects is hard. Whether or not you personally work closely with data teams and IT folks, you are probably familiar with how difficult it is for organizations to get it right, and at scale.

Deployment means taking data scientists' work from the prototype to the production stage: from a draft on someone's laptop to a data product that performs heavy computation every day, whether that's making predictions (recommender systems, churn, etc.) or handling more standard applications like data processing.

Indeed, deployment of AI solutions is a common issue faced by organizations worldwide that want to get started (or even move past getting started and into scalability), and it’s not the most trivial issue either. Here are just some of the reasons why operationalization is hard to get right, even for the largest of companies with lots of resources.

The Robustness Challenge

After weeks or months of work on a project, a data scientist ends up with a prototype. Imagine a data scientist at Netflix building a recommendation engine that helps make suggestions for the next thing customers want to watch.

Yet until this prototype gets passed to IT and actually implemented in the Netflix IT architecture, the project is just that: a prototype, a draft that has been run on a data scientist's machine with sampled, reduced-size datasets.

More importantly, the data scientist will likely have used a variety of libraries on their laptop that are not compatible with the company's IT architecture. Besides that, the model might only run on manual command (a click, for example), which still requires human intervention. No mechanisms are in place to schedule or automatically trigger a daily (or weekly, monthly, etc.) update to the model, or to check its consistency in production.
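
To make that gap concrete, here is a minimal sketch (in Python, with a hypothetical churn model and hypothetical file paths) of what moving from "run on manual command" to a scheduled batch job can look like; a real production version would add monitoring, error handling, and access to production data sources.

```python
# Minimal sketch: turn a "run by hand" prototype step into a scheduled batch scoring job.
# The model artifact, file paths, and column names are hypothetical.

import datetime
import logging

import joblib      # commonly used to persist scikit-learn models
import pandas as pd

logging.basicConfig(level=logging.INFO)


def score_daily_batch(model_path="churn_model.joblib",
                      input_path="daily_customers.csv",
                      output_path="churn_scores.csv"):
    """Load the latest customer extract, score it, and write predictions to disk."""
    model = joblib.load(model_path)           # trained prototype, persisted once
    customers = pd.read_csv(input_path)       # full production extract, not a laptop sample
    features = customers.drop(columns=["customer_id"])
    customers["churn_probability"] = model.predict_proba(features)[:, 1]
    customers[["customer_id", "churn_probability"]].to_csv(output_path, index=False)
    logging.info("Scored %d rows at %s", len(customers), datetime.datetime.now().isoformat())


if __name__ == "__main__":
    # Instead of a human clicking "run," a scheduler triggers this script, e.g. a crontab entry:
    #   0 2 * * *  /usr/bin/python3 /opt/jobs/score_daily_batch.py
    score_daily_batch()
```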

Data Science Can Get Messy

Tools used by data scientists can vary greatly: each data scientist has their own preferences for certain languages (Python, R, MATLAB, etc.) and, within those languages, for specific libraries (and sometimes not even the same versions of those libraries).

Operationalization should be — but rarely is — a well-coordinated handoff between data and IT teams.

For R users alone, roughly 10,000 packages are available. Even if the Python ecosystem is more orderly, you can still imagine the plethora of tools that a data science team of, say, 20 people can be leveraging to build models.

The result is that data science teams often end up working with inconsistent tools: some on their laptops, some on a server, some directly in a database, some in their own filesystem. If you're picturing the IT version of a scientist's lab or mechanic's garage, you're not far from the truth. Data science is an iterative process, and it takes a lot of work and experimentation to get a model right.
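
One small but concrete way teams start taming this is by pinning the exact library versions a prototype depends on, so that the environment can be recreated elsewhere. Below is a minimal sketch (in Python; the package list is hypothetical) of capturing those versions into a pip-style requirements file; in practice, a simple `pip freeze > requirements.txt` or a conda environment file does the same job.

```python
# Minimal sketch: record the exact package versions a prototype depends on so the same
# environment can be rebuilt when the project is handed over. The package list is hypothetical.

from importlib.metadata import version  # standard library in Python 3.8+

PROTOTYPE_DEPENDENCIES = ["pandas", "scikit-learn", "xgboost"]  # whatever the prototype imports

if __name__ == "__main__":
    # Writes pip-style pins such as "pandas==2.1.4" (versions depend on the local install).
    with open("requirements.txt", "w") as handle:
        for package in PROTOTYPE_DEPENDENCIES:
            handle.write(f"{package}=={version(package)}\n")
```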

But IT Can’t Be Messy

Nearly everyone within an organization has some interaction with IT, whether it's just to get a laptop fixed or for more complex projects. Whatever the situation, the one thing any IT department hates above all is surprises. Their mission is to keep an (often complex) architecture stable, integrating all the different systems — billing, web analytics, client data, etc. — that allow their organization to run smoothly, efficiently, and safely.

Everything that is running live in production needs to be perfectly under IT's control. What exactly does "in production" mean? Well, when you're buying a ticket from an airline, for example, you might be browsing offers and making a purchase. All of these actions go through a booking system, or a payment solution linked with a marketing platform, and all of those customer-facing systems are live, or "in production."

When it comes to data scientists' work, IT is often tasked with creating order out of relative chaos, which can cause team friction.

Integrating data scientists' AI-powered projects into production inevitably adds more complexity and risk. In other words, while data scientists need to be able to work with a variety of tools (languages, libraries, clusters, etc.), the mission of IT is to maintain a perfect, pristine production environment. Oftentimes, these two things seem to be (or are) at odds, but they don't have to be.

Robustify to Operationalize

Obviously, operationalization is a challenge, but despite that, there are many data science projects, or data products, running live in organizations today. That's because those organizations have paid the price of integrating and of what we call "robustifying" the model.

Robustifying a model consists of taking a prototype and preparing it so that it can actually serve the number of users in question (millions in the case of Netflix, but perhaps hundreds or thousands for your average business), which often requires a considerable amount of work, typically from data engineers.

What kind of work? In many cases, the entire model needs to be re-coded in a language suitable for the architecture in place. That point alone is very often a source of massive and painful work, resulting in many months' worth of deployment delays. Once that is done, the model has to be integrated into the company's IT architecture, with all the library issues previously discussed. Add to that the often difficult task of accessing data where it sits in production, frequently burdened with technical and/or organizational data silos, and you'll understand why operationalization of AI, i.e., allowing an AI model to enhance operations, is extremely tough.
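
One common alternative to a full re-code is to wrap the prototype behind a small prediction service that the production stack can call over HTTP, whatever language that stack is written in. Here is a minimal sketch, assuming a scikit-learn-style model persisted with joblib and Flask for the HTTP layer; the model file, endpoint, and payload format are hypothetical, and a real deployment would add a proper application server, monitoring, and access control.

```python
# Minimal sketch: expose a Python prototype as a small HTTP prediction service so the
# production stack (whatever language it uses) can call it over REST instead of re-coding
# the model. The model file, endpoint, and payload format are hypothetical.

import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # the robustified prototype, loaded once at startup


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                 # e.g. {"records": [{"age": 42, "plan": 1}]}
    features = pd.DataFrame(payload["records"])
    probabilities = model.predict_proba(features)[:, 1]
    return jsonify({"churn_probability": probabilities.tolist()})


if __name__ == "__main__":
    # A real deployment would sit behind a production WSGI server (e.g. gunicorn), with
    # monitoring and access control; exactly the kind of work data engineers add.
    app.run(host="0.0.0.0", port=5000)
```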

Unintended Consequences

The difficulty of pushing projects to production has at least two consequences for data science teams:

  1. Either data science teams anticipate that their work will have to be passed to IT. For that reason, they reduce their array of tools (languages, libraries, etc.) or simplify their models as much as possible. A side effect is that data scientists have to cherry-pick which projects they work on and deploy. Indeed, why work on 15 projects at a time if you can only deploy one project per year? The consequence of all of this is a reduction in the impact of each project and an overall limitation of what the data or AI team can accomplish.
  2. Or data science teams don't anticipate the issues that will arise in deploying their prototype. In this case, not only do the negative consequences mirror the previous point, but add to them a deadly tension between data science and IT teams (and potentially other areas of the business, possibly all the way up to the board). Millions of dollars invested in AI with no result: this rarely goes unnoticed by top management and shareholders.

But Wait! There’s a Better Way

Does all of this make deployment of data science projects seem difficult and like an inefficient use of precious human and technology resources? That's good, because it is! Or at least, it can be.

But the good news is that it doesn't have to be this way. More and more companies are turning to a data science, machine learning, or AI platform that can ease the burden of operationalization by providing more efficient means to deploy data projects into production quickly and — critical from an IT point of view — safely.
