In 2018, O’Reilly surveyed more than 11,000 people about the stage of machine learning adoption in their organizations; almost half reported they were still in the exploration phase. It’s probably safe to say that for at least some of those explorers, the risks surrounding data and AI projects are paralyzing, keeping them stuck in experimentation.
To be clear, moving forward with a plan in place to address risk is good practice. But there needs to be a balance: a data project risk management plan can’t be so restrictive that people can’t work effectively with data to impact the business.
This fear and hesitation can throw a wrench into wide-scale data science, machine learning, and AI projects in particular. Today, the democratization of data science across the enterprise, with tools that put data into the hands of the many rather than just an elite few (like data scientists or even analysts), means that companies are using more data in more ways than ever before.
And that’s enormously valuable; in fact, the businesses that have seen the most success in using data to drive decisions take exactly this approach. However, without the right training, tools, and processes, it can also expose risks, particularly for those not previously familiar with data science workflows.
If data democratization is the path toward eventually enabling AI services, it will require extensive collaboration among business people, analysts, and data experts (like data scientists and engineers), plus potentially rolling out initiatives like a self-service analytics program. O’Reilly breaks down the possible types of risk in its ebook Foundations for Architecting Data Solutions:
- Technology risk
- Team risk
- Requirements risk
Balancing all three means setting up methodologies and an environment for success: using development principles and strategies to manage and mitigate risk, setting realistic expectations, and providing a guide to building successful teams.