Practical Steps to Avoid AI Accidents

Scaling AI | Sebastian Werner

Have you heard about rogue AI? One classic example is the Tay chatbot, which was trained on the comments section of the internet and, in hindsight, did exactly what a comments section does. There’s no shortage of other examples, because when AI goes rogue it is sure to make the news!

While terrifying and entertaining in equal measure, the public focus on rogue AI can blind those responsible for generating business value from data (let’s call them “data people”) to another much less obvious risk: AI accidents.

But what exactly are AI accidents? The term can mean two things:

  1. You have AI that goes rogue accidentally (but you knew you were building AI).
  2. Or you have AI by accident (meaning you didn’t know that you were building AI and thought it was something else, perhaps less risky, until…).


If you’ve ever put in all the work required to build AI of any complexity, it might seem strange that such accidents can happen. Surely, projects always start with well-defined business needs, then data is collected and prepared, and then models are trained and compared, with an appropriate balance struck between performance and other important considerations like fairness, right?

While that might be true for AI built collaboratively from the beginning in platforms like Dataiku, it is definitely not how much of the AI out there in the world actually gets built.

AI builds on the same foundations that machine learning, including classic regression techniques, introduced decades ago; the difference lies in the scale and complexity of the methods. The basic problems, however, have not gone away: extrapolation outside the training range is unreliable, too little training data prevents a statistically sound model fit, and correlation vs. causation issues obstruct interpretability.
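To make the extrapolation problem concrete, here is a minimal sketch (the data and model are invented for illustration, assuming numpy and scikit-learn are available): a model fit only on a narrow input range looks fine inside that range and can fail badly outside it.

```python
# A minimal sketch of the extrapolation problem (data and model are invented for
# illustration): a fit that looks fine inside the training range can be far off
# outside it.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data only covers x in [0, 5]
x_train = rng.uniform(0, 5, size=200).reshape(-1, 1)
y_train = np.sin(x_train).ravel() + rng.normal(0, 0.1, size=200)

model = make_pipeline(PolynomialFeatures(degree=6), LinearRegression())
model.fit(x_train, y_train)

# Inside the training range: a reasonable prediction
print("x=2.5  predicted:", round(model.predict([[2.5]])[0], 2), " true:", round(np.sin(2.5), 2))

# Outside the training range: the polynomial fit can diverge wildly
print("x=9.0  predicted:", round(model.predict([[9.0]])[0], 2), " true:", round(np.sin(9.0), 2))
```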

Consider these examples:

1. A bank in Central Europe needed a risk model to pass stress tests. The model started out as a rules-based system two decades ago; ten years ago, more variables and more data sources were added. Regulators judged it "compliant" for the purpose for which it was designed. Along the way, the stress-testing process created new data sources that gave a new, aggregated representation of the bank's balance sheet. These, in turn, became the input for more advanced models used to monitor liquidity limits. But the data engineering requirements for a rules-based approach and for advanced models are not the same. Because the foundations were not suitable, the advanced models suddenly and unexpectedly started delivering poor performance.

2. A pharmaceutical manufacturer struggles with a sales forecast model: While it accurately predicts sales for some regions, for others it is always (but unpredictably) off, which affects sales team target-setting, hinders performance, and sometimes even drives people to leave the company. The model was built by a single expert and stored and executed on their laptop. When that expert left the company, the model was handed over to others, but everybody is too scared to touch it. Despite clearly identified biases, it is still running today, leading to questionable decisions on sales targets and performance monitoring.

In both of these examples, the solution to the business problem did not start out as AI, but at some point resulted in AI accidents. 

What Are the Risks of This?

Unintended behavior is clearly undesirable, as it can lead to wrong, inaccurate, or even dangerous outcomes, which is why we should look at it in a bit more detail. And to be frank: Accidents can happen not just to chatbots, but to all other AI and, more precisely, to all types of models.

The model itself may not care that it's wrong, but your business, regulators, and customers will. That can lead not only to risk and compliance issues and to competitors moving in on new markets and trends, but also to lost money. So, what are the warning signs of AI accidents?

  1. Models built for one purpose generate outputs that end up being used for other purposes, without insight into the explicit and implicit assumptions that were made along the way and may or may not be documented: Even simple rules-based models, when chained together in this way, can start to exhibit emergent behavior we would only expect from more sophisticated systems (the banking example above). And while capitalization and reuse should be the norm for the right economics, careful monitoring systems are a must-have to make this robust.

  2. Models that started off as an individual's "pet project" to help answer, with the best of intentions, a concrete question of real business importance become god-like either in their impenetrability (maybe the creator left…) or in the respect they command (nobody dares question what has always worked). Again, the distinction is not the methods used but the extent to which crucial decisions are delegated to an automated system with unpredictable results at the extremes, without consciously addressing the tradeoffs and continuously monitoring the consequences.

  3. Proprietary, black-box models bought or licensed from third parties for a particular purpose: You might have thought you were simply buying a tool to help screen CVs, but from many regulators' perspectives, you've actually embarked on an AI journey with destination and vehicle unknown. The exact assumptions, training data, and biases that went into the black box are unknown to you, and you have little chance of knowing for sure that you are using it right. In other words: You may already be working with models that are operated outside the range they were developed for, knowingly or unknowingly.

So, how can teams mitigate the risks associated with AI accidents? The answer lies in clear governance and MLOps processes, which we’ll discuss in the next section.

Best Practice Approaches to Limit AI Accidents

"Because AI systems are not perfect optimizers, and because there may be unintended consequences from any given specification, emergent behavior can diverge dramatically from ideal or design intentions." -Pedro Ortega et al., DeepMind Safety Team

To effectively manage AI accident risk, it's best to define a clear goal and what is out of scope, so you can compare that with reality. Identify:

  • What you ideally want your algorithm to do (ideal spec)
  • How to break it down and implement it (design spec)
  • What actually got implemented, which may or may not behave in the way you intended it to (real behavior)

That also means writing down expected ranges for both inputs and outputs, as these are assumptions, too!
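As a minimal illustration (the feature names and ranges below are hypothetical), those expected ranges can be written down as explicit, checkable assumptions rather than left implicit:

```python
# A minimal sketch (hypothetical feature names and ranges) of writing expected
# input and output ranges down as explicit, checkable assumptions.
EXPECTED_INPUT_RANGES = {
    "age": (18, 100),               # e.g., the model was only trained on adults
    "monthly_income": (0, 50_000),
}
EXPECTED_OUTPUT_RANGE = (0.0, 1.0)  # e.g., a probability score

def check_assumptions(record: dict, prediction: float) -> list[str]:
    """Return the list of violated assumptions for one scored record."""
    violations = []
    for feature, (low, high) in EXPECTED_INPUT_RANGES.items():
        value = record.get(feature)
        if value is None or not (low <= value <= high):
            violations.append(f"{feature}={value} outside expected range [{low}, {high}]")
    low, high = EXPECTED_OUTPUT_RANGE
    if not (low <= prediction <= high):
        violations.append(f"prediction={prediction} outside expected range [{low}, {high}]")
    return violations

# Usage: log, block, or route to manual review when the assumptions do not hold
issues = check_assumptions({"age": 135, "monthly_income": 4_200}, prediction=0.42)
if issues:
    print("Assumption violations:", issues)
```

With those assumptions written down and checkable, you can incorporate these steps: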

1. Establish scalable oversight.

If you only have one model, you might be able to set up a committee to discuss, benchmark, and sign off on it. However, that becomes impossible with hundreds or thousands of models. So, in advanced situations, you can support human sign-off by having multiple types of AI debate. For example, in Dataiku 10, visual model comparisons give data scientists and ML operators side-by-side views of performance metrics, feature handling, and training information to aid in both model development and MLOps workflows. Further, users can have mandatory sign-off and approval of models before they can be deployed in production, which can include multiple and customizable reviewers and approvers.
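To make the idea of side-by-side comparison concrete outside of any particular tool, here is a minimal, hypothetical sketch (the models, data, and metrics are illustrative assumptions, not the Dataiku feature itself): evaluate all candidates on the same held-out data and metrics so reviewers have a common basis for benchmarking and sign-off.

```python
# A minimal, tool-agnostic sketch: evaluate all candidate models on the same
# held-out data and the same metrics so reviewers can compare them side by side.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, proba)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:22s}  AUC={auc:.3f}  accuracy={acc:.3f}")
```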

Further, by inferring human preferences from behavior, you can specify a goal (design specification), take hundreds of decisions that humans have already made, and train the machine to mimic that behavior as closely as possible, an approach that can also benefit from human sign-off. Developing and deploying AI projects and models without proper oversight can result in poor performance and unintended impacts on customers and the organization. With enterprise-grade governance and AI portfolio oversight (like in Dataiku), teams can implement standardized project plans, risk and value assessments, a central model registry, and workflow management for reviews and sign-off.

2. Test, verify, and validate extensively, regularly, and from different perspectives. Then, continuously monitor and evaluate the results.

Teams can act on deviations by iterating on their process (e.g., via CRISP-DM) and involving people from diverse backgrounds, skill sets, and technical and domain expertise. As we shared in our annual trends report, 2021 was the year that a significant number of organizations realized they will not scale AI without enlisting diverse teams to build and benefit from the technology, and MLOps is a critical part of any robust AI strategy. AI Governance in Dataiku provides a dedicated watchtower where AI program and project leads, risk managers, and key stakeholders can systematically govern projects and models and oversee progress across the entire AI portfolio.
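As an illustration of what continuous monitoring can look like under the hood (a sketch with assumed feature names, data, and thresholds, not a Dataiku API), the Population Stability Index is one common way to flag when production data drifts away from the training baseline:

```python
# A sketch of one common drift check: the Population Stability Index (PSI)
# compares the distribution of a feature in production against the training
# baseline. Data and thresholds are assumptions for illustration.
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time sample and a recent production sample."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # also catch values outside the training range
    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    baseline_pct = np.clip(baseline_pct, 1e-6, None)  # avoid log(0)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - baseline_pct) * np.log(recent_pct / baseline_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(3000, 800, size=10_000)
production_income = rng.normal(3600, 900, size=2_000)   # the population has shifted

psi = population_stability_index(training_income, production_income)
print(f"PSI = {psi:.3f}")   # a common rule of thumb treats PSI > 0.25 as significant drift
```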

3. Expect failure and design for it from the start.

When you build models, you measure and analyze them against benchmark cases that represent the best possible performance. Go further and explicitly test situations that push the AI to its limits, and see what the impact of its decisions on business processes would be. This is essentially a risk evaluation: Don't just check the performance of the model itself; check what the "blast radius" of your model is and how it impacts adjacent and downstream decisions. We therefore recommend building safeguards around it, especially for AI in critical business applications. For more on this topic, check out the early release chapters of the O'Reilly book "Machine Learning for High-Risk Applications: Techniques for Responsible AI."
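As a rough sketch of designing for failure (all names, limits, and the toy model below are hypothetical), you can stress-test a model with extreme inputs before deployment and wrap live predictions in a guardrail that limits their blast radius:

```python
# A rough, hypothetical sketch of designing for failure: stress the model with
# extreme inputs before deployment, and wrap live predictions so one bad score
# cannot trigger an unbounded downstream action.
def stress_test(predict, extreme_inputs):
    """Run the model on deliberately extreme inputs and record what it would do."""
    findings = []
    for x in extreme_inputs:
        try:
            findings.append((x, predict(x)))
        except Exception as err:            # a crash is also a finding
            findings.append((x, f"error: {err}"))
    return findings

def guarded_prediction(predict, x, lower=0.0, upper=1.0, fallback="manual_review"):
    """Only return the model output if it stays inside the agreed range."""
    score = predict(x)
    if not (lower <= score <= upper):
        return fallback                     # limit the blast radius of a bad score
    return score

def toy_model(x):
    return 0.002 * x                        # behaves sensibly only for small inputs

print(stress_test(toy_model, [0, 100, 10_000, -50]))
print(guarded_prediction(toy_model, 10_000))   # -> "manual_review" instead of 20.0
```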

4. Train your AI in a reproducible way that systematically checks for bias, and be aware that there may be unconscious bias as well.

Models built with biased data will likely produce biased predictions: the models themselves don't care, but your customers (and the business!) likely will. With the right tools and processes, data scientists and their co-builders can produce models that deliver more responsible and equitable outcomes. For example, Dataiku offers disparate impact analysis to measure whether a sensitive group receives a positive outcome at a rate close to that of the advantaged group, while subpopulation analysis lets users see results broken down by group. Both analyses help find groups of people who may be treated unfairly or differently by the model.
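For intuition, here is a minimal sketch of the ratio behind a disparate impact check (the groups, data, and the common "four-fifths" threshold are illustrative assumptions, not output from Dataiku's built-in analyses):

```python
# A minimal sketch of a disparate impact check: compare the positive-outcome
# rate of a sensitive group against the advantaged group. Groups, data, and the
# 0.8 "four-fifths" threshold are illustrative conventions.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Positive-outcome rate per group (the subpopulation view)
positive_rate = results.groupby("group")["approved"].mean()
print(positive_rate)

# Disparate impact: rate of the sensitive group relative to the advantaged group
disparate_impact = positive_rate["B"] / positive_rate["A"]
print(f"Disparate impact ratio (B vs. A): {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Warning: group B receives positive outcomes at a much lower rate than group A")
```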

5. Start talking about the mishaps — and raise awareness.

Raising general awareness and educating both management and the people involved in AI projects about "AI accidents" is a first step, and openly talking about them is the next. In science, "negative results" are typically not published, but this practice can lead to many other people going down the same wrong path! Teams should facilitate information sharing about AI accidents and near misses: the use of AI-embedded technology is an emerging field, so not everything is going to work on the first try, but the goal should be to learn from any mistakes or unintended consequences. Further, teams should invest in AI safety R&D and in AI standards development and testing capacity.

Looking Ahead

AI, like any algorithm, needs a clear governance strategy and business processes around it. No matter which type of model you use, you need to verify and validate it, because the model itself won't care if it's wrong. The onus is on you to monitor the performance of your models through end-to-end lifecycle management because, in the end, performance is what matters to business continuity.

We also recommend not limiting these best practices to the "new kid," AI, but applying the same rigor to every other kind of model that automates important decisions based on data. Drawing an artificial line between "new" and "old" modeling approaches plays down the potentially greater risks of the latter. This way, we can truly enable scaling AI while limiting risks.
