How Can We Manage the Risks of AI?

Scaling AI | Shaun McGirr

In the first article of this series, we learned some of the many ways AI already influences us at different stages of life and how important our own data is in the development of AI. Now, we’ll learn how the risks of AI can be identified, balanced, or even avoided, whether as individuals, teams, or societies.

What are those risks? They range from the innocuous, such as a clearly wrong answer from a chatbot (which you know to ignore), all the way to disproportionately harsh prison sentences handed down to particular ethnic groups on the basis of biased algorithms.

And while it’s these impacts that make the news, just as troubling is how easily people fall into a chain of decisions that leads to these negative impacts. Ultimately, much of the harm caused by AI is not intentional; it comes from naivete, ignorance, or even negligence, from not consciously engaging with this powerful technology, which we might not always understand.

Experts in the responsible use of AI break down risk mitigation into three themes: explainability, traceability, and accountability. I’ll illustrate all three risk dimensions in the following sections.


Explainability 

Explainability simply means that when we use AI to make decisions, especially about others, it should be possible for those affected to understand how the decision was taken and to challenge the decision on the basis of that understanding. We’ve all fallen victim at one point or another to a paper-based decision we thought was arbitrary, and this is no different with AI except for one important detail: The complexity of an AI algorithm can be beyond any human’s comprehension.

In practice, this means those of us building AI-powered products can be more responsible users of AI by opting for simpler, less black-box algorithms wherever possible, even if they might perform less well at a given task. What we get instead — explainability — is often more important. Another nuance arises in the age of Generative AI, when much more powerful and complex pre-trained models can be used via a simple API call; in these cases, explainability is nearly impossible, creating an even sharper trade-off. And for those of us using AI-powered products, remember that in many jurisdictions we already have the right to ask how an automated decision affecting us was reached, and we should exercise it.
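To make that trade-off concrete, here is a minimal sketch (in Python, using scikit-learn on a stand-in dataset with hypothetical feature names) of choosing a simple linear model whose decision logic can be read back to the person affected, even if a more complex model might score slightly higher.

```python
# A minimal sketch of preferring an explainable model, using a stand-in
# dataset; the feature names below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: in practice this would be your own records.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear model may score a little worse than a complex ensemble,
# but each decision can be traced back to weighted, named inputs.
model = LogisticRegression().fit(X_train, y_train)

for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {weight:+.3f}")  # how each factor pushes the decision
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The point is not the specific library, but that every factor influencing the decision can be named and weighed in a way a human can understand and challenge.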

Traceability 

Traceability for AI means the same as for any digital system: Who built which parts of it, and when? On which data was it trained? Who monitors it now? This is important because you could have a very explainable AI system that was built by people who no longer work at your company, who perhaps left inadequate documentation, and whose system was kept running because it was valuable.

As AI regulation gathers steam in jurisdictions around the globe, that state of affairs will become increasingly unacceptable, as it already is within financial services. So if you’re building AI-powered products, ensure you can answer all of those questions about your product. For those using AI-powered products, choose products that are transparent about who built their AI components, on what data, and with which safeguards.
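As an illustration, the sketch below (a hypothetical record structure, not any particular product’s API) gathers the traceability questions in one place: who built the model, when, on which data, who monitors it, and which safeguards apply.

```python
# A minimal, hypothetical sketch of a traceability record kept alongside
# a deployed model; all field names and values are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    version: str
    built_by: str                  # who built it
    built_on: date                 # when it was built
    training_data: str             # which data it was trained on
    monitored_by: str              # who watches it in production
    safeguards: list[str] = field(default_factory=list)

record = ModelRecord(
    name="credit-scoring",
    version="2.3.0",
    built_by="risk-analytics team",
    built_on=date(2023, 5, 12),
    training_data="applications_2018_2022 snapshot, PII removed",
    monitored_by="model-risk office",
    safeguards=["quarterly bias review", "human review of declines"],
)
print(record)
```

Keeping a record like this with every deployed model means the answers survive even after the people who built it have moved on.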

Accountability 

The final dimension of Responsible AI is often the toughest to tackle. Explainability has a technical meaning and even mathematical definitions, which allow it to be measured in multiple ways, optimizing for different goals. Traceability is more about the processes by which AI is built and run, so it is less clear-cut but still very auditable, like any other process. But accountability cuts deeper: When something goes wrong, whose fault is it, and who pays the price? We are relatively good at handling this when it is humans making decisions, but much less experienced when it comes to AI.

The 3 Risk Dimensions as They Relate to ChatGPT 

We can illustrate all three risk dimensions, and the challenging trade-offs they pose, just by reference to ChatGPT as it stands today. As one of the most sophisticated AI models ever built, it is, for all practical purposes, unexplainable. How it arrived at any particular answer to any particular question cannot be explained to a human, precisely because many of its capabilities in summarizing and translating between languages and concepts are superior to our own. So if you want to use it in a regulated setting, where someone might ask, "Why did you do that?" you might well answer, "ChatGPT told me so," but you won’t be able to explain why ChatGPT told you so. In some applications of AI, that will be fine, but in many others it will not.

Traceability is all about how AI is built and maintained, and here again ChatGPT represents a challenge: That know-how is the proprietary intellectual property of OpenAI. While they are unlikely to ever fully open up the black box to explain how it works, they could publish all the steps they took to build it … but that is not their business model. What this means is that we can only consume ChatGPT at a distance; we cannot run a version ourselves, fully under our control. Once again, fine for some use cases, not for all.

Finally, the accountability trade-off: Regardless of how we balance the risks around explainability and traceability, there is a separate decision about who takes the credit when things go right and the blame when things go wrong. Managing the risks of AI means being ready for the potential scenarios where those risks mature into damages or harms. While legal frameworks concerned with liability for damage or harm caused by AI continue to evolve, it falls to every organization to decide for itself how accountability will be assigned. Effectively addressing the risks discussed here, by producing audit-worthy evidence of model development, use, quality control, and lifecycle management, is a strong start! In financial services, for example, these trade-offs are already well understood, but in many other industries this is new territory.
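As one illustration of what audit-worthy evidence can look like in practice, the sketch below logs every automated decision together with the model version and a named accountable owner, so credit and blame have somewhere to land later; all identifiers are hypothetical.

```python
# A minimal sketch of an audit trail for automated decisions:
# each entry records what was decided, by which model version,
# and who is accountable. Names and IDs are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("decision_audit")

def record_decision(applicant_id: str, decision: str,
                    model_version: str, accountable_owner: str) -> None:
    """Write one audit entry for a single automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "decision": decision,
        "model_version": model_version,
        "accountable_owner": accountable_owner,
    }
    audit_log.info(json.dumps(entry))

record_decision("A-1042", "declined", "credit-scoring 2.3.0", "model-risk office")
```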

Looking Ahead 

If you’re a little overwhelmed by these risks, you’re not alone, and that’s good, because these trade-offs cannot be settled by any one of us alone; they have to be worked through together, as individuals, teams, and societies. And that starts with asking the right questions about any given AI: How explainable are its recommendations to humans? How traceable is the process used to build it? And who is accountable and responsible for it?

In the final article of this series, we’ll pivot from risks to opportunities for applying AI in your daily work.
