There has always been, and still is, a great deal of concern about how AI might reshape our daily lives. Marc-Uwe Kling published his book “QualityLand” in 2017, a satire that masterfully depicts a future civilization overwhelmed by AI. The book is an exaggeration of what we are all, to some extent, already experiencing and can relate to. It raises awareness of how important parts of our lives can be taken over by AI without a certain level of human intervention and responsibility.
There are a number of undeniable references to the large companies that are shaping the world of digitalization and AI today, which inevitably raises the topic of public concern and trust in AI. It also highlights how a lack of control over data and machine learning (ML) models can damage a company’s reputation.
So, how can we equip ourselves today to prevent AI from going out of control tomorrow? Let’s see how Dataiku empowers stakeholders to gain a deeper understanding of model behavior, enhances trust, and supports Explainable AI projects and initiatives.
In Kling’s future, people are rated from 1 to 100 based on a number of characteristics (job, age, health, creativity, IQ, EQ, etc.), and, without a good rating, you are devalued to “useless.” Based on this rating, the model finds you suitable friends and partners. Yet Peter, the main character of the book, does not seem to be happy with the choices of the AI. When he is with the friends he is supposed to like, he gets into a bad mood, and he is utterly unlucky in love. Can we assume that the model does not function properly, or that it is simply not representative of someone like Peter?
Making sure that reliable data is used for the model, removing any biases, and choosing the correct set of modeling techniques and validations are essential for making the model usable. Dataiku offers an extensive set of metrics for evaluating ML models, depending on whether you are working with a regression model or a classification model. Once training is finished, you can have a look at the confusion matrix, calibration curve, ROC curve, and other performance metrics. You can even run a subpopulation analysis to assess whether the model behaves consistently across subpopulations (e.g., age groups), since favoring one group over another can lead to biased outcomes and unintended consequences when the model is put into production.
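The idea behind a subpopulation analysis can be sketched in a few lines of scikit-learn, independently of Dataiku. Everything here is illustrative: the dataset is synthetic, and the column names (`age_group`, `label`) are invented for the example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data: two numeric features plus an age-group attribute
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "activity": rng.normal(0, 1, n),
    "age_group": rng.choice(["18-30", "31-50", "51+"], n),
})
# Synthetic label driven by the features, with noise
logits = 0.05 * (df["income"] - 50) + df["activity"]
df["label"] = (logits + rng.normal(0, 1, n) > 0).astype(int)

X = pd.get_dummies(df[["income", "activity", "age_group"]])
train, test = train_test_split(df.index, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X.loc[train], df.loc[train, "label"])

# The subpopulation analysis itself: compute the same metric per group.
# A large gap between groups is a warning sign of biased behavior.
scores = model.predict_proba(X.loc[test])[:, 1]
test_df = df.loc[test].assign(score=scores)
for group, sub in test_df.groupby("age_group"):
    print(group, round(roc_auc_score(sub["label"], sub["score"]), 3))
```

The same pattern works for any metric: slice the holdout set by the sensitive attribute, evaluate each slice separately, and compare.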
Interactive reports on model interpretation, i.e. feature importance, partial dependence plots, subpopulation analysis, and individual prediction explanations, and model performance in Dataiku Visual ML
These metrics can be displayed for the particular model in question or compared side-by-side with other candidate models across key metrics. AI-powered dating and friendship apps today already use advanced algorithms to analyze and interpret users’ data and raise the potential for successful matchmaking. Making sure the algorithm is performant and representative of all the different groups of people is crucial to offering more personalized matches.
In the book, Peter runs a metal press in which he is meant to destroy robots that have malfunctioned or simply have glitches. Despite the slogan of the robot running for president, “Machines don’t make mistakes,” Peter witnesses otherwise: there is a drone that is afraid to fly, a combat robot with PTSD, and so on. The underlying concern is that models do not always perform as expected and end up not helping (and potentially even harming) us where they are intended to be used. It is true that once an ML model is deployed in production, its quality can start degrading fast and without warning.
As data changes, one must continuously monitor and evaluate model performance to ensure the model is behaving as expected. The model evaluation store in Dataiku allows data scientists to continuously monitor model performance and scan for any drift in input data and prediction results. Combined with status checks on any chosen metric (e.g., whether ROC AUC fell below an acceptable threshold) and automation scenarios, data scientists can rest assured that the model is working properly and that relevant warnings or error messages will be raised otherwise.
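A minimal, platform-independent sketch of what such monitoring does under the hood: compare a feature’s training-time distribution against recent production data with a two-sample Kolmogorov–Smirnov test, and check a performance metric against a threshold. The threshold values and the `current_roc_auc` figure are illustrative assumptions, not Dataiku’s implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 5000)   # feature distribution at training time
prod_feature = rng.normal(0.4, 1.0, 5000)    # production data has shifted

# Data drift check: a small p-value means the distributions differ
stat, p_value = ks_2samp(train_feature, prod_feature)
drift_detected = p_value < 0.01

# Performance check: would come from a fresh evaluation run in practice
current_roc_auc = 0.71
AUC_THRESHOLD = 0.75                         # illustrative acceptance threshold

if drift_detected or current_roc_auc < AUC_THRESHOLD:
    print("WARNING: drift or degraded performance, investigate / retrain")
```

An automation scenario would run a check like this on a schedule and send the warning to the team instead of printing it.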
Model comparison in Dataiku Visual ML
It is not a minor detail (whether the author meant it or not) that robots in QualityLand keep asking for a review, as timely retraining of the model on new data is essential for proper performance.
The use of AI in QualityLand goes in such an obscure direction that the future civilization seemingly loses touch with the meaning of AI and why people employ data analysis and ML in the first place. In one scene, a doctor tells parents that their child has a high likelihood of becoming a drug addict. Not only is he unable to give the parents any suggestions for improving the situation, he is also not entirely aware of what the result is attributed to.
Model explanations and what-if analysis are an essential part of developing and using AI in a governed way. ML models are a great tool to help us avoid risks and understand how to improve certain outcomes. Dataiku includes interactive reports for feature importance, partial dependence plots, and individual prediction explanations to empower data scientists and business users not only to explain the results, but also to understand what will happen if one of the features changes.
Powerful what-if analysis allows both technical users and business experts to test different combinations of inputs and review the impact on predicted results. Simulation capabilities even enable teams to systematically uncover and prescribe changes that would lead to the desired business outcomes (which would have worked out well for the child, the parents, and the doctor in the book). With the visual interface for what-if analysis, business users can build trust in predictive models and apply knowledge of model behavior in practical ways.
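The mechanics behind a what-if analysis can be sketched with any trained model: hold one input row fixed, vary a single feature, and watch how the predicted probability responds. The model, data, and feature choice below are illustrative, not what Dataiku does internally.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train an arbitrary classifier on synthetic data
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# What-if: start from one real row, sweep a single feature across its range
base_row = X[0].copy()
for value in np.linspace(X[:, 2].min(), X[:, 2].max(), 5):
    row = base_row.copy()
    row[2] = value                      # "what if feature 2 were this value?"
    p = model.predict_proba(row.reshape(1, -1))[0, 1]
    print(f"feature_2={value:.2f} -> P(positive)={p:.2f}")
```

A visual what-if interface wraps exactly this loop in sliders and dropdowns, so a business user can probe the model without writing code.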
Dataiku generates row-level prediction explanations (ICE and SHAP) to provide additional information for predicted results. Together, these techniques help explain how a model makes decisions and enable data scientists and stakeholders to understand the factors influencing model predictions.
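Of the two techniques mentioned, ICE is straightforward to sketch with scikit-learn alone (SHAP requires the separate `shap` library). The model and data below are illustrative, not Dataiku’s internals: each ICE curve shows, for one individual row, how the prediction would change as a single feature varies.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

# Train an arbitrary regressor on synthetic data
X, y = make_regression(n_samples=500, n_features=3, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# ICE curves for feature 0, computed for 20 individual rows:
# kind="individual" returns one curve per row instead of the average (PDP)
result = partial_dependence(model, X[:20], features=[0], kind="individual")
ice_curves = result["individual"][0]    # shape: (n_rows, n_grid_points)
print(ice_curves.shape)
```

The average of these curves is the partial dependence plot; keeping them separate is what makes the explanation row-level.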
Interactive what-if analysis in Dataiku Visual ML
Maintaining the Human Factor in Machine Learning
In QualityLand, the foolproof algorithms of the biggest, most successful companies can send customers merchandise before they even know they want it. Yet there is a failure in the system, and when Peter Jobless receives the wrong item (one that he definitely had no desire to get), he struggles to find a way to return it. Customer service has been delegated to machines, and it is seemingly impossible to find a human to resolve an issue and handle an exception.
Kling is a perceptive commentator on our modern world of digital marketing, social media, and politics, and on where it can end up without thorough human responsibility over AI. There is a lot of concern that machines are replacing humans, but it should be emphasized that this is not entirely true and really should not be the case.
AI is a powerful tool in the hands of business experts: it enables smarter decisions and more efficient operations. Maintaining the human factor in ML, however, is essential to ensure that models make sense and are ethical and consistent with our values. Only human intelligence, empowered with the right tools and solutions, can keep AI under control and keep us from becoming a QualityLand.