Is Artificial Intelligence a Threat?

Scaling AI | Rose Wijnberg

Since the beginning of time, people have built machines to do things that humans can’t, and artificial intelligence (AI) is no exception. But even though AI has been with us for decades — the earliest successful AI program was written in 1951 — today, people are still (rightfully) worried about some of the implications. What will it take to change the perception that AI is a threat and to ensure we’re building it in a way that’s fair, legal, and safe?

The latest episode of AI & Us, the new web series by Dataiku that explores how AI is changing our everyday lives, tackles exactly this question. In order to answer it, it’s important to first take a step back and understand exactly what AI is (and isn’t) capable of today.

We’re a Long Way from Artificial General Intelligence

Today, AI is good at three things: prediction, imitation, and optimization. The reality is that we’re a long way from what’s known as artificial general intelligence, or AGI: a system that could make decisions on its own or complete any task a human could.


Here at Dataiku, we don’t believe that AI is inherently evil or an imminent threat to mankind. That’s not to say, though, that the technology is risk-free. Said differently, we (as the developers and consumers of AI tools and systems) pose more of a risk to ourselves if we don’t exercise caution in how we go about our AI experiments and projects than, say, killer robots taking over the world do.

But That Doesn’t Mean We Don’t Need Guardrails

When building AI systems at scale, organizations have a responsibility to enable and train their users on the technology at hand. To help put guardrails in place to prevent these risks, organizations need to implement a Responsible AI strategy — an inclusive approach to designing, building, and deploying AI that is in line with stated intentions. Practitioners must be aware of the historical and social context in which they build pipelines, and use that knowledge to inform more equitable data science applications. 

"The risk doesn’t come from machines suddenly developing spontaneous malevolent consciousness … The problem isn’t consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives." 

- Stuart Russell, Computer Scientist and AI Pioneer

The onus is on organizations to define a precise framework that spells out which ethical rules should (and should not) be followed. This ensures that the company takes a clear position on all of its principles and makes those principles easier to communicate among and across teams. Here are some common foundations of a sustainable Responsible AI strategy:

  1. Intentionality: Ensuring that models are designed and behave in ways aligned with their purpose
  2. Explainability: Under the intentionality umbrella, explainable AI means that the results of AI systems can be understood by humans, not only by the people who built the system (see the sketch after this list)
  3. Accountability: Having a centralized place to seamlessly view which teams are using what data, how, and in which models (closely tied to traceability) 
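
To make explainability concrete, here is a minimal sketch of one common technique, permutation importance, using scikit-learn. The dataset, model choice, and feature names are assumptions for illustration only; in practice, the inputs would be an organization’s own data and models.

```python
# A minimal explainability sketch: which features does the model rely on?
# The data, model, and feature names below are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; replace with the organization's own dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["tenure", "monthly_spend", "support_tickets", "age", "region_code"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much shuffling a feature degrades
# held-out accuracy; large drops flag the features the model leans on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A readout like this is not a full explanation on its own, but it gives people beyond the model’s builders a starting point for asking why the model behaves the way it does.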

More widely, Responsible AI fits within an AI governance framework, a requirement for any organization aiming to build efficient, compliant, consistent, and repeatable practices when scaling AI. For example, AI governance policies will document how AI projects should be managed throughout their lifecycle (what we know as MLOps) and what specific risks should be addressed and how (Responsible AI). In order to create human-centered AI grounded in explainability, responsibility, and governance (and eliminate any concerns about an AI-initiated “Doomsday”), organizations need to:

    • Provide interpretability for internal stakeholders
    • Test for biases in their data and models (a minimal check is sketched after this list)
    • Document their decisions 
    • Ensure models can be explained so organizations can accurately identify if something is wrong, causes harm, or involves risk of harm
    • Create a data culture of transparency and a diversification of thought
    • Establish a governance framework for data and AI
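
As one illustration of what “test for biases” can look like in practice, below is a minimal sketch of a demographic parity check with pandas. The column names and example values are hypothetical; the right sensitive attribute and fairness metric depend on the use case and the applicable regulations.

```python
# A minimal bias-check sketch: compare selection rates across groups.
# The columns and values below are hypothetical stand-ins.
import pandas as pd

# Stand-in scoring output: one row per individual with the model's decision
# and the sensitive attribute used for the check.
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group: the share of positive decisions each group receives.
selection_rates = scored.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the highest and lowest
# selection rates. A large gap is a signal to dig into the data and the model,
# not an automatic verdict.
parity_gap = selection_rates.max() - selection_rates.min()
print(selection_rates)
print(f"Demographic parity difference: {parity_gap:.2f}")
```

Checks like this belong alongside, not in place of, the documentation and governance practices listed above.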
