The Question of AI & Ethics

Scaling AI | Robert Jett

It’s the dystopian backdrop to every science fiction novel involving artificial intelligence: a dark and stormy police state in which cold, unfeeling robots patrol for nonconforming humans. While we at Dataiku are optimistic that we’re still some way off from that bleak future, 2018 has been a year riddled with ethical questions about the effect artificial intelligence will have on our lives. Whether it’s Cambridge Analytica’s use of Facebook data to influence elections or Google’s involvement with the Department of Defense’s drone project, data has had its fair share of scrutiny in the media lately. AI is the trendy buzzword of 2018, with the unique ability to inspire wonder and to generate intense discomfort among people.

Technological advancement is often a tense game between progress and pragmatism, in which the needs of users must be weighed against the implications of the technology’s use. Generally speaking, there are two major areas of concern when it comes to the ethics of artificial intelligence: human replacement and amorality.

Replacement or Enhancement?

As the aptly named willrobotstakemyjob.com can tell you, the idea that AI is going to “take your job” is very popular and very scary to a lot of people. In discussions about the ethics of AI, this question of job loss and human replacement almost always comes up. What are we to do in a world in which computers are able to do all of our jobs better than we can?

The reality, however, is that the direction AI has taken in recent years is much more aligned with a narrative of enhancement than of outright replacement. The vast majority of AI use cases involve automating repetitive tasks that carry high time costs, from sorting through thousands of stocks to make buying and selling decisions to combing through long lists of transactions for signs of fraud (neither of which is especially complicated, but both of which are very time-consuming).
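To give a sense of how mundane most of this automation really is, here is a minimal, hypothetical sketch of that fraud-screening example. It uses synthetic data and scikit-learn’s IsolationForest purely for illustration, and doesn’t reflect any particular product or real dataset; the point is simply that the machine does the tedious scanning so humans only review the handful of cases that look unusual.

```python
# A minimal, hypothetical sketch of the kind of repetitive screening AI is
# good at: flagging unusual transactions for a human analyst to review.
# The data is synthetic; no specific product or dataset is implied.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Ordinary transactions cluster around small amounts; a handful are outliers.
normal = rng.normal(loc=50, scale=15, size=(995, 1))
odd = rng.normal(loc=5000, scale=500, size=(5, 1))
amounts = np.vstack([normal, odd])

# The model scores every transaction so analysts only review the flagged few.
detector = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
flags = detector.predict(amounts)  # -1 means "looks anomalous"
print(f"{(flags == -1).sum()} of {len(amounts)} transactions flagged for review")
```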

AI is mostly useful for automating the boring stuff that businesses would be better off not spending money having humans do.

Yes, there may be some professions that AI will make less necessary (and therefore less in demand). But at the core of Enterprise AI is the idea of moving away from expensive and unnecessary processes in favor of smartly optimized ones. In the same way that people even 30 years ago couldn’t have imagined jobs like “Data Scientist” or “IoT Solution Architect,” this process of optimization will almost certainly create a new set of professions in which the use of AI is integrated from the start.

Amorality

The second, and perhaps more troubling, implication of artificial intelligence is its necessary amorality. It is important to make clear the difference between amorality and immorality here. AI will likely not launch a Terminator-esque takeover of the world on its own. It’s the humans who are programming these robots, with their uniquely human biases and motives, who are much scarier.

A particularly popular example of this comes from the heavily studied field of racial and gender bias in artificial intelligence. Neural networks trained on non-representative or discriminatory input data have no choice but to internalize the biases of that data. This has shown up in facial recognition software, image recognition and association, natural language processing, crime prediction, and a host of other areas. Because AI is seen as such an opaque, black-box technology, many of these problems occur completely unintentionally.
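To make that concrete, here is a minimal, hypothetical sketch (synthetic data, a plain scikit-learn classifier, not any real system) of how a model trained on historically skewed decisions simply reproduces the skew. Nothing in the code mentions bias, yet the disparity comes out the other end, because it was already baked into the training data.

```python
# A hypothetical sketch: a model trained on skewed historical decisions
# internalizes the skew. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A sensitive attribute (two groups) and a genuinely relevant feature.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels: outcomes depended on skill, but group 1 was approved
# far less often in the past -- the bias lives in the data, not the model.
label = ((skill - np.where(group == 1, 1.0, 0.0) + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train a perfectly ordinary classifier on the historical record.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The model faithfully learns the historical disparity.
for g in (0, 1):
    mask = group == g
    rate = model.predict(X[mask]).mean()
    print(f"predicted approval rate for group {g}: {rate:.2f}")
```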


It’s unlikely we’ll find ourselves in a world run by tyrannical AI any time soon.

In the same way that Gmail is not to blame for the passive-aggressive email your boss sent you yesterday, AI cannot be blamed for questionable choices made by the people behind its use, even if those choices were made unintentionally. The solution to this problem can only come when teams from across industries and backgrounds are able to properly use tools to integrate AI into their processes. When AI is democratized, it necessarily becomes more equitable.

Well, What Now…

Although these concerns might seem scary, it is important to recognize that the benefits of AI will almost certainly outweigh the risks. AI will help doctors detect cancers faster than ever before. It will help packages get to your door sooner and will revolutionize medicine.

The fear of artificial intelligence doesn’t come from the technology itself; it comes from its perception as a far-away super-technology reserved for PhDs at Google and computer scientists at MIT. It feels exclusive and elitist, and many people already feel left behind.

The truth, however, is that machine learning and AI are becoming more and more democratized. The ethical use of AI is not something top-down that will happen because a tech giant says it will; it will come from real people working together to make the technology work for the betterment of everyone.
