3 Steps Toward More Ethical AI

By Romain Fouache

For a long time, the risks of AI remained limited because the technology was confined to academia and research and development. Over the last few years, however, its rise and its practical applications in business, as well as in the daily lives of citizens, have given it a much more real, and much bigger, impact. It’s no wonder, then, that questions of AI ethics have come up.


Interestingly, AI sometimes poses ethical issues because it is so efficient and precise (for example, identifying individuals using facial recognition in public and judging their behavior based on associated personal information). Other times, it’s because it’s not precise enough (e.g., recidivism risk predictions).

So is ethical AI a myth? Ethics are certainly subjective: they cannot be quantitatively measured, and a recent study published in Nature illustrates how cultures differ in their perception of what counts as an ethical AI decision.

On top of that, ethics is a controversial subject that mixes many disciplines (philosophy, science, social science, technology, law, etc.), so there is clearly no simple formula. If universal AI ethics doesn’t exist, how can companies address the subject?

Step 1: Define the Ethics of the Organization

The first step for an organization determined to deploy AI is to define a precise framework of ethical rules that its systems should, or should not, follow. For example, the risk of gender bias might not be a problem for a retail clothing recommendation engine, but it’s another story for financial institutions.

Defining these criteria in line with the work and values of the company brings two important benefits:

  1. It ensures that the company takes a clear position on all of its principles.
  2. It facilitates communication of these principles among and across all teams.
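To make such a framework concrete enough to act on, some organizations also capture it in a machine-readable form that individual projects can query. The sketch below is purely illustrative: the rule names, attributes, and thresholds are assumptions for the example, not part of any particular product.

    from dataclasses import dataclass, field

    @dataclass
    class EthicsRule:
        """A single, explicitly stated ethical rule of the organization."""
        name: str                        # e.g., "no_gender_bias_in_lending" (hypothetical)
        applies_to: list[str]            # project types or domains the rule covers
        protected_attributes: list[str]  # attributes that must not drive decisions
        max_disparity: float             # tolerated gap in outcomes between groups

    @dataclass
    class EthicsFramework:
        """The organization's ethics charter in machine-readable form."""
        organization: str
        rules: list[EthicsRule] = field(default_factory=list)

        def rules_for(self, project_type: str) -> list[EthicsRule]:
            """Return the rules a given project type must comply with."""
            return [r for r in self.rules if project_type in r.applies_to]

    # Example: gender bias is unacceptable for credit scoring, while a clothing
    # recommender might only be held to weaker constraints.
    framework = EthicsFramework(
        organization="ExampleCorp",
        rules=[
            EthicsRule(
                name="no_gender_bias_in_lending",
                applies_to=["credit_scoring"],
                protected_attributes=["gender"],
                max_disparity=0.02,
            ),
        ],
    )
    print(framework.rules_for("credit_scoring"))

Writing the rules down this way forces the clear position mentioned above and gives teams something they can actually check projects against.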

Step 2: Accountability & Empowerment

If there is one thing that increases the risk of AI, it is the perception that AI is intrinsically objective, that is, that its recommendations, forecasts, or other outputs aren’t subject to individuals’ biases. If that were the case, the question of ethics would be irrelevant to AI: an algorithm would simply be an indisputable representation of reality.

This misconception is extremely dangerous not only because it is false, but also because it tends to create a false sense of comfort, diluting team and individual responsibility on AI projects. In reality, algorithms present many opportunities for bias, including:

  • The choice of data. AI is built from pre-existing data, and the choice of that data has a fundamental impact on the system’s behavior. For example, Amazon’s facial recognition system had trouble recognizing women and people of color because white men were over-represented in the data used to build it; having mostly learned to distinguish white male faces, it could not properly handle other faces.
  • Pre-existing biases in data. Datasets frequently contain biases themselves. The system that automatically eliminated women during a recruitment process, for example, did so largely because of past biases in recruiting: if the data used is recruiters’ past choices, and those choices were biased, the AI will have the same faults (a basic representation check is sketched after this list).
  • The choice of modeling techniques and validation. Each step of creating an AI can induce bias and drift.
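As a quick illustration of the first two points, one simple check is to measure how subgroups are represented in the data before any model is trained. The sketch below (Python with pandas) uses a hypothetical recruiting dataset and a hypothetical gender column; it is illustrative only, not a prescribed audit.

    import pandas as pd

    def representation_report(df: pd.DataFrame, column: str) -> pd.DataFrame:
        """Share of each subgroup in the dataset, to spot over- or under-representation."""
        counts = df[column].value_counts(dropna=False)
        return pd.DataFrame({
            "count": counts,
            "share": (counts / len(df)).round(3),
        })

    # Hypothetical recruiting dataset: 80% of the historical applicants are men.
    applicants = pd.DataFrame({
        "gender": ["M"] * 800 + ["F"] * 200,
        "hired":  [1] * 300 + [0] * 500 + [1] * 20 + [0] * 180,
    })

    print(representation_report(applicants, "gender"))
    # A strong imbalance here is a warning that a model trained on this data
    # may reproduce the historical bias described above.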

Addressing these risks requires answers that are both human and technical. Human, because it is essential to educate and empower the teams developing AI systems: in practice, the people who develop and deploy a model must be aware of its potential shortcomings and bear responsibility for its possible faults. At Dataiku, for example, we offer training on this subject.

There are also, of course, technological ways to limit these risks, for example by explicitly analyzing whether a model behaves differently across subpopulations. But it’s critical to recognize that successfully empowering teams (with both training and tools) depends heavily on step one, defining the ethics of the organization.
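As a purely illustrative sketch of what such an analysis can look like, the snippet below compares a model’s positive-prediction rate across subpopulations. The column names and the 0.8 threshold (a common rule of thumb for disparate impact) are assumptions for the example, not a built-in check of any particular tool.

    import pandas as pd

    def selection_rates(scored: pd.DataFrame, group_col: str,
                        pred_col: str = "prediction") -> pd.Series:
        """Positive-prediction rate for each subgroup."""
        return scored.groupby(group_col)[pred_col].mean()

    def disparate_impact(scored: pd.DataFrame, group_col: str,
                         pred_col: str = "prediction") -> float:
        """Ratio of the lowest to the highest subgroup selection rate.
        Values well below 1.0 suggest the model favors some groups."""
        rates = selection_rates(scored, group_col, pred_col)
        return rates.min() / rates.max()

    # Hypothetical scored dataset: one row per person, a binary model output,
    # and the attribute the organization's ethics framework flags as sensitive.
    scored = pd.DataFrame({
        "gender":     ["M", "M", "M", "F", "F", "F", "F", "M"],
        "prediction": [1,   1,   0,   0,   0,   1,   0,   1],
    })

    print(selection_rates(scored, "gender"))
    ratio = disparate_impact(scored, "gender")
    if ratio < 0.8:  # assumed organizational threshold (the "80% rule" of thumb)
        print(f"Warning: disparate impact ratio {ratio:.2f} is below threshold")

Which attributes to test and which thresholds to enforce should come directly from the framework defined in step one, which is exactly why the two steps are inseparable.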

Step 3: AI Governance, Aligned with Organizational Ethics

The third step is establishing real AI governance within the company. Today, a large percentage of organizations that have gotten started with data science have governance systems that grew semi-organically: it is not uncommon to see several, or even dozens of, teams within a large group develop different AI systems, each using different technologies and data.

Once deployed, models are monitored individually by their owners.

Yet governance at scale means monitoring all projects centrally: being able to see at a glance what data is used where and how each model is performing. Establishing this level of governance, based on the ethical principles defined in step one and supported by the empowered, accountable teams of step two, is the best way to guarantee responsible AI.
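What “seeing everything at a glance” might mean in practice is easiest to show with a toy central registry. The structure below is a hypothetical sketch (invented model names, teams, and datasets), not a description of any vendor’s governance model.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRecord:
        """Minimal governance record for one deployed model."""
        name: str
        owner_team: str
        datasets: list[str]    # data sources the model was trained on
        last_evaluated: date
        performance: float     # main monitored metric (e.g., AUC)
        bias_checked: bool     # has the subpopulation analysis been run?

    # A central registry is simply the collection of these records,
    # queryable across every team and project.
    registry = [
        ModelRecord("churn_scoring", "marketing_ds", ["crm_2023"],
                    date(2024, 5, 1), 0.87, True),
        ModelRecord("credit_risk", "risk_ds", ["loans_2022", "bureau_feed"],
                    date(2024, 4, 12), 0.81, False),
    ]

    # "At a glance": which models use a given dataset, and which still lack
    # the bias review required by the ethics framework?
    uses_bureau = [m.name for m in registry if "bureau_feed" in m.datasets]
    missing_review = [m.name for m in registry if not m.bias_checked]
    print(uses_bureau, missing_review)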

Ultimately, the question of ethics must be addressed by those undertaking AI efforts. Though ethics can’t be quantitatively measured, we at Dataiku believe it is critical to ensure that the actions taken by AI systems respect ethical rules consistent with their environments. As a supplier of an AI solution, we leave it to our customers to define and develop their own AI frameworks, but we provide the technology and tools for governance and the foundations of a more ethical AI future.
