From choosing AI projects that matter to building the models that deliver them, organizations often struggle to achieve real impact. At 2021’s Data Innovation Summit, I discussed what stands in their way, or rather, what they might be doing wrong.
Low-Hanging Fruit vs. Moonshot Projects: A Cautionary Tale
We’ve all heard the recommendation in some form: “Go after the low-hanging fruit!”
But looking for quick, easy, and high-value problems to solve can lull organizations into a false sense of security, leading them to miss more useful opportunities. How can this happen? Surely every organization has plenty of this kind of fruit?
The problem is, AI projects that at first look easy and appealing (ripe and low-hanging!) are very likely projects that other business processes or teams have already picked. In other words, just because your team can build a more accurate forecast model in an afternoon does not mean that presenting this victory to your sales team will automatically lead to adoption: what looked like an “easy win” to you was, to them, low-hanging fruit that had already fallen from the tree.
If we divert our attention from these potentially resource-sucking, low-value outcomes, what is left? Use cases that are not so easy, not so fast, and perhaps only of medium value. These make better starting points, because revolutionizing an organization with AI means precisely not shying away from the hard work beyond choosing the right data and algorithms.
And you never know: by aiming higher on the tree, you might discover a feasible path from some of these relatively mundane but foundational projects to something truly remarkable, a so-called “moonshot” whose risk is reduced precisely because you did not attack it first.
Don’t be a project raccoon, grabbing onto shiny projects with broad front-end appeal without evaluating their actual potential for sustainable ROI. Make this mistake and you will fall into the trap of missing out on the projects that propel forward the companies that are not afraid to take risks.
Caught in Limbo?
So, maybe you’ve already implemented AI solutions in many business processes, but without exactly heeding the advice above. Now you find your organization caught in a web of low expectations, high maintenance costs, and rapidly accumulating technical debt that has brought your AI productivity to a halt. How does an organization break out of this limbo, get more AI projects into production, and tackle those moonshot projects to build a successful future?
A good place to start is understanding the limitations of your operating model, whether you adopted it explicitly or simply grew into it over time. Once you are aware of those limitations, you can get a head start on crafting an AI approach that opens up opportunities for growth while recognizing the resource constraints you have.
Different Operating Models & Their Weaknesses
- The decentralized operating model: no business unit develops enough capability to tackle use cases beyond a certain complexity, which puts the highest-value use cases out of everyone’s reach.
- The centralized operating model makes the opposite trade-off: one team develops all the capability needed for the more complex use cases, but this necessarily raises the minimum value a use case must promise before the team can deliver the desired ROI.
- The hub-and-spoke operating model attempts a compromise: central capability handles the high-complexity use cases while decentralized capability delivers the remaining value, yet huge swaths of potential value remain untapped because no one has both the capability and the mandate.
Now that we know the weaknesses of each model, what’s the takeaway? While every model above acknowledges a role for both the data and AI teams and the business teams, collaboration between them is not automatic, even in the hub-and-spoke model. If you want the synergies that emerge from this collaboration, you must intentionally design practices that support it and integrate them into your processes, whichever model you choose.
Design Your Workflow for Innovation
At the end of the day, it is YOUR organization. What worked out perfectly for one company could be a great starting point for another company’s strategy, but the people and business processes that make an organization tick are nuanced, and choosing the right combination and sequence of AI projects requires experimentation.
One thing many of us learn the hard way: models should never become our pets. If the first model you choose is not working out, do not be afraid to rework your approach; the sooner you rework and fine-tune, the better. Be careful not to keep investing time in a model that isn’t working, and don’t compromise the future success of your AI strategy for the sake of sunk costs.
So, how can you build a successful model, and a workflow that delivers more successful models over time?
First, remove barriers to rapid innovation. This means not only ensuring expert data scientists can move quickly but also creating a safe playground where people from all backgrounds can test ideas. Many, perhaps even most, of the best ideas for AI projects will come from outside the teams of data experts, and empowering those people to try out ideas at low cost generates more good ideas for the experts to harden into finished products. Second, automate as much as possible of the busy work required to take the best experimental ideas and make them fit for production. You might be surprised how many organizations have dozens or hundreds of great ideas sitting at the “experiment” stage, with no resources to actually capture their value!
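To make that second point concrete, here is a minimal sketch, in Python, of what a “promotion gate” from experiment to production backlog could look like. The fields, thresholds, and project names are illustrative assumptions, not a prescribed tool; the point is that the hand-off from experiment to production candidate can become a cheap, repeatable check rather than ad-hoc busy work.

```python
# Illustrative sketch only: a tiny "promotion gate" that turns a pile of
# experiments into a ranked production backlog. Field names and thresholds
# are hypothetical assumptions, not a specific tool or framework.
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    estimated_annual_value: float     # rough business estimate, not a prediction
    has_owner_in_business_unit: bool  # someone with the mandate to adopt it
    passes_quality_bar: bool          # e.g. beats the current process on holdout data

def production_backlog(experiments, min_value=50_000):
    """Keep experiments worth productionizing and rank them by estimated value."""
    candidates = [
        e for e in experiments
        if e.passes_quality_bar
        and e.has_owner_in_business_unit
        and e.estimated_annual_value >= min_value
    ]
    return sorted(candidates, key=lambda e: e.estimated_annual_value, reverse=True)

if __name__ == "__main__":
    backlog = production_backlog([
        Experiment("demand-forecast-v2", 120_000, True, True),
        Experiment("churn-score-poc", 80_000, False, True),  # no owner: stays an experiment
        Experiment("logo-detector", 10_000, True, True),     # too little value to justify the work
    ])
    for exp in backlog:
        print(exp.name, exp.estimated_annual_value)
```

Note that the gate checks for a business owner as well as model quality; that is one way to avoid the “easy win without adoption” trap described earlier.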
A Final Tip
Remember that the true value of anything is rarely obvious on first examination. To fully reap the rewards of AI, you must constantly evaluate and adjust both how you choose what to work on and how you deliver that work. Without this, you will miss out on value that initially presented itself as pure risk, and in turn risk getting stuck on low-hanging fruit that was much less valuable than it first appeared.