When I very enthusiastically joined Dataiku over three years ago, I was full of hope for technology, the startup world, and empowering people with data.
So enthusiastic, in fact, that I decided to share my hope in all things data with the world by launching a newsletter, which grew to become Banana Data News. At the time, most articles on AI were about how fun machine learning is, amazing data use cases, and cute data visualization projects. There were already a few detractors of course, among them Elon Musk and Stephen Hawking, but their critiques were still very conceptual.
Then I gradually started to spot dark clouds. A few articles came out that went against the great tech future I envisioned, and then more and more of them followed.
While we had been hearing abstract fears of AI for years, what I was starting to see were actual use cases where AI had gone seriously wrong. And even use cases where AI had done what it was supposed to and still managed to go terribly wrong.
See some of my favorite scary AI projects here.
AI was being used to send the wrong people back to jail, to manipulate people into voting for extremist parties, to identify homosexuals based on their supposed facial features, and to refuse women jobs. These use cases are showing up more and more often.
Moreover, they are no longer limited to specialist data science media and papers, but are making it to the mainstream media. As a result, I started slowly losing faith in technology and its backbone, artificial intelligence.
The Thing Is, Though, I Work for an AI Company
At the same time, Dataiku grew from a data science software startup to an AI company with over 200 people around the world. I mean, it says it right there on the home page: Your Path to Enterprise AI.
And while a few years back people gave me awkward blank stares when I talked about my job, recently I've been getting many more shocked faces at the thought that I'm contributing to such destructive technology.
So how can I live with myself and not fear that I’m enabling people to do more horrible things? Well, to figure out why I didn’t just quit my job the second I started putting together this article, let’s start by identifying what makes me — and probably you at this point — distrust AI.
Why We Fear Artificial Intelligence
It's important to look into what makes up this fear. At their root, the things that scare us about artificial intelligence do so for a set of very different reasons.
1. Automation Is Changing Jobs and Impacting the Less Qualified
This is the oldest fear in the book: automation is taking away jobs.
Countless reports show that automation will lead to a massive number of people losing their jobs. McKinsey, for example, has estimated that as many as 73 million U.S. jobs could be destroyed by automation by 2030.
This fear was also a driving factor in the first and second industrial revolutions. When a society transitions from one means of production to another, many people lose their jobs in the first phase of the transition, because their jobs are made obsolete by the new means of production. When the first machines were introduced in factories, many craftsmen lost their jobs.
In the second phase of an industrial revolution though, the new machines create more jobs — in this case, people building, maintaining, and working on the machines. In that phase, the GDP of the country goes up.
Today, many jobs are already being lost to automation, in Amazon warehouses for example (see below). But we're still not seeing any rise in GDP, which could lead to the conclusion that we are still in the first phase of the current revolution. Leading economists, however, are worried that we are facing a different type of industrial revolution altogether.
2. Technology Is Reinforcing Bias & Inequalities
This is perhaps the most pressing issue with AI today. The data that goes into AI algorithms is a representation of our world, and our world is deeply biased against many different groups. But the AI doesn't know about the bias. So when we train artificial intelligence on past data, the tech treats the bias as a pattern to reproduce, not something to correct.
So we’re creating biased algorithms.
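To make that mechanism concrete, here is a minimal sketch with entirely made-up data (none of it is from the original article): a model is trained on past hiring decisions where women were held to a higher bar, and it simply learns to hold them to that higher bar too.

```python
# Minimal illustration (hypothetical data) of how a model trained on biased
# historical decisions reproduces them rather than correcting them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
is_woman = rng.integers(0, 2, n)      # 1 = woman, 0 = man
skill = rng.uniform(0, 10, n)         # an "objective" skill score

# Biased historical labels: the hiring bar was higher for women.
hired = np.where(is_woman == 1, skill > 9, skill > 6).astype(int)

X = np.column_stack([skill, is_woman])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model learns a large negative weight on the "is_woman" feature:
# yesterday's bias becomes tomorrow's decision rule.
print("coefficients [skill, is_woman]:", model.coef_[0])

# Two candidates with identical skill, different gender:
candidates = np.array([[8.0, 0], [8.0, 1]])
print("P(hired):", model.predict_proba(candidates)[:, 1])
```

Nothing in the training step is "wrong" from the model's point of view: it faithfully reproduces the pattern in the labels, and that is exactly the problem.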
Moreover, the field of machine learning and data science today is dominated by white men who don't always think to check and test for bias. This became evident when several people pointed out that many facial recognition technologies don't work well on Black faces.
So as a citizen, a person, a consumer, a woman, and even a worker, I am worried, and I'm not alone. Especially since the next reason below makes it much worse.
3. Algorithms Are Making Decisions That Impact Us Without Us Being Aware of It
Algorithms make a lot of decisions that impact our society without us knowing. For example, algorithms decide which prisoners are most likely to commit crimes once they are released, yet lawyers and prisoners have no transparency into how those decisions are made. In China, the government uses data for a lot of things, like finding criminals outside pop concerts using facial recognition and surveillance.
And who knows how many more official decisions could be made automatically without our knowing?
More worrisome still, governments aren't the only ones today with that level of impact on people and businesses. Tech companies control an incredible amount of the content we consume and the way we see and interact with the world around us.
Facebook released research showing that a person's happiness level is impacted by the content they see in their Facebook feed. Interesting. But push the reasoning a little further: since the Facebook algorithm controls what we see in our feed, the company had to decide how happy it wanted to make its users. And do we think they went with the maximum level of happiness? Or maybe they went for a level that ensured people were just unhappy enough to keep scrolling?
Think of another area of our lives that AI will soon largely control: our streets and cars. While self-driving car technology is being thoroughly tested and highly controlled, the algorithms used by Facebook or Google were never tested in a similar way, and recent news stories have shown that not even their makers really understand how they work — and their impact on our lives is just as major!
4. Algorithms Are Taking Away What Makes Us Human
This is a more subtle fear, and quite a Western-centric one as well. It is the fear that algorithms are doing exactly what they are built for:
- creating seamless experiences for us,
- taking away the stress of decision making from us by pushing content we’ll like,
- and making our comfort zone that much more comfortable and protected from outside interference.
These algorithms are creating rabbit holes of awareness that cut us off from the world. They leave us unaware of the world outside of people like us, and more vulnerable to manipulation. Between Brexit and the U.S. presidential elections, this has been in the news so much that we've all become painfully aware of it.
What these algorithms are doing is also potentially taking away part of what makes us human. Our right to make decisions is one of those things: exercising free will, even if it is only to choose tonight's movie, is part of what makes us human. Algorithms built to recommend what we truly want, without us having to come to that conclusion ourselves, are taking away a bit of our humanity, even if it is tiny, and even if we're quite willing to let them have it.
I also believe that part of what makes us human is our ability to thrive in the in-between moments: letting our minds wander while we stand in line, having the next great idea while sitting on the subway, meeting someone new while waiting for a movie to start. Algorithms target these moments of "emptiness" to keep us busy, trying to keep us entertained, always. Their biggest growth opportunity is the time we have "available." By creating algorithms that keep us constantly occupied and never bored, consuming content they know we'll like, we're creating a world that is quite dark, at least in my opinion.
5. The Robot Uprising Is Coming
Let's be honest: we're kind of playing god with something we don't know enough about, and in the long term it is very likely that we will lose control of artificial intelligence, and that robots will make slaves of us, or keep us in nice robot zoos if we're lucky.
I mean, that’s really just a fact, ask Will Smith.
So there you have it — all of the things that scare us about artificial intelligence. But I for one believe there's hope, and will get right on writing an article on exactly why.