Why (and How) to 'Call Bullshit' on AI

Data Basics | Lynn Heidmann

By the very nature of the phrase, we think of data science — and, by extension, machine learning and AI — as, well… a science. And traditionally, those of us on the business side, or anyone without a technical background, don’t feel comfortable questioning results from AI systems; after all, we’re not data scientists, so we’re not the experts, right?


Yet as AI becomes more democratized and more widely used across organizations, knowing enough to challenge and question the results of predictive models or AI systems will become an increasingly important skill, especially for business leaders.

The good news is that you don’t need to be a data scientist to identify and call out problematic models or AI systems. All you need are a few basic logical and analytical skills, and probably a copy of Calling Bullshit: The Art of Skepticism in a Data-Driven World, by Carl Bergstrom and Jevin West.

“The central theme of this book is that you usually don’t have to open the analytic black box in order to call bullshit on the claims that come out of it. Any black box used to generate bullshit has to take in data and spit results out… Most often, bullshit arises either because there are biases in the data that get fed into the black box, or because there are obvious problems with the results that come out. Occasionally the technical details of the black box matter, but in our experience such cases are uncommon. This is fortunate, because you don’t need a lot of technical expertise to spot problems with the data or results. You just need to think clearly and practice spotting the sort of thing that can go wrong.”

 — Calling Bullshit: The Art of Skepticism in a Data-Driven World, by Carl Bergstrom and Jevin West

→ Get the Ebook: Black-Box vs. Explainable AI — How to Reduce Business Risk

Top 4 Tips for Business Leaders

If you want to become more confident in questioning the results of data science projects, machine learning models, or AI systems (with the goal of reducing the business risk of embarrassing PR or poorly performing systems that hurt the bottom line), here are four ways to start:

  1. Get up to speed with the basic data science and machine learning lingo. It will be infinitely easier to communicate with your technical counterparts if you’re making strides to speak the same language. This introduction to key data science concepts is a good starting point. 
  2. Brush up on correlation vs. causation and other statistics basics. Calling Bullshit underscores the point that one of the most frequent misuses of data is suggesting a cause-and-effect relationship based on correlation alone (see the short sketch after this list). We’ve got you covered with 14 must-know stats and probability terms.
  3. Learn how to ask critical questions about data visualization. Is the story the visualization is telling honest? Is the format the right one to tell the story at hand? Is the scale selected for the axes distorting the story?
  4. Don’t forget the golden rule: If something seems too good (or bad) to be true, it probably is. AI platforms (like Dataiku) can help put this best practice into action. For example, Dataiku’s what-if scenario feature, alongside other explainability features, builds trust in predictive models by letting business users see the results those models generate in common scenarios and test new ones.

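To make the correlation-vs.-causation point from tip 2 concrete, here is a minimal, hypothetical Python sketch (the variable names and numbers are invented purely for illustration): two metrics can be strongly correlated simply because both depend on a hidden third factor, so a high correlation on its own never proves that one causes the other.

```python
import numpy as np

# Synthetic example of a spurious correlation driven by a confounder:
# ice cream sales and sunburn cases both rise with hot weather,
# so they correlate strongly even though neither causes the other.
rng = np.random.default_rng(42)

summer_heat = rng.normal(size=1_000)                          # hidden confounder
ice_cream_sales = 2.0 * summer_heat + rng.normal(size=1_000)
sunburn_cases = 1.5 * summer_heat + rng.normal(size=1_000)

r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"correlation(ice_cream_sales, sunburn_cases) = {r:.2f}")
```

Running this prints a correlation well above 0.5, yet banning ice cream would clearly do nothing to prevent sunburn; the hidden driver (hot weather) is doing all the work. That is exactly the kind of question to ask when a model or dashboard presents a strong relationship as if it were a causal one.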