Is the Path to Governed AI the New Zelda Quest?

Scaling AI | Sophie Dionnet, Dataiku

The task of scaling AI takes many shapes and forms, and it raises new questions as companies mature their setup. One of the key structural questions is how to balance action with control: we all know that AI comes with its own set of risks, and managing them through enforced principles that protect both companies and end beneficiaries is a must. At the same time, companies need to ensure that governance does not slow down their AI development, so they can reap its full benefits for their strategic growth.

Wait — wasn’t this blog post supposed to be about Zelda? What is the link between such a serious matter and the quest of a young boy in a green outfit to save a princess? It all has to do with the “Triforce”…

While the adventures of brave Link have continuously reinvented themselves, taking us from driving trains to playing an ocarina, sailing boats, and flying thanks to chickens, one pattern has remained constant: Link has always needed to master three key competencies to overcome whatever evil was threatening Hyrule — and thus free Princess Zelda.

Close your eyes and think back: I am sure you can visualize the Triforce, formed by three triangles tightly linked together. As a Zelda player, you have spent hundreds of hours collecting new weapons, learning new tricks, and overcoming scary monsters to reach this stage of fulfillment: the Triforce.

[Image: the Triforce]


Now, Zelda is certainly not the only game to play with the image of triangles and what they represent: balance, reconciliation, and the combination of diverse skills to handle a complex matter. But in young Link's quest, with his need to upskill, confront new hazards, and balance perspectives, I find a fitting allegory for the current quest for AI Governance.

And it all starts with this symbol: our belief at Dataiku is that the ambition for AI Governance can’t be separated from having the right operational foundations — also referred to as MLOps — or from possessing a clear set of principles, commonly called Responsible AI. Together, they form a triangle of triangles, tightly connected and best envisaged jointly to deliver the ultimate objective they share: safe AI scaling.

[Diagram: the AI Triforce]

Hold on — so now we’re moving back from Zelda to AI Governance, a “serious” topic of risk and control? Well, maybe the quest that many companies have embarked on when it comes to AI is not so different from the one Link undertakes game after game.

Let’s imagine for a minute that Link has dropped his green tunic to become a new, modern hero: a company on an AI scaling journey. While hopefully not triggered by anything as tragic as the kidnapping of a princess, all companies embark on AI journeys for compelling reasons: cost savings, risk reduction, and growth. And at every stage, the need for success is strong.

As they do so, they embark on a path that will lead them to master new tools (machine learning models, neural networks, NLP techniques), to upskill, to understand business outcomes, and more. More importantly, they will have to ensure that none of their work sits on a shelf, unused. The primary goal is to put everything to work, which makes MLOps the first triangle to master.

Just as importantly, these companies will have to ensure that all the projects now running smoothly in production correspond to their goals and principles, and that any unwanted harms are identified, tackled, and weighed against their impact. Responsible AI thus becomes the second cornerstone of their journey.

And finally, the AI-curious will have to orchestrate the entirety of this journey at the right scale: articulating principles, enforcing processes, managing controls, and much more. This is AI Governance, the third triangle of the triforce.

The journey to Governed AI is far from being a simple one, and it will take a while for each company to fully master its AI triforce. But remember your Zelda days: finishing the quest is no individual journey. In a game like Zelda, you lean on combining strengths and learning from others through forums (“But how did you get past this evil troll in the 3rd dungeon???”). 

As the stakes of AI grow, so does the number of public and private initiatives emerging to learn from each other and to create rules, frameworks, and labels, of which the EU AI Act is just one example. Last week, we were proud to host a full-day workshop on AI governance with over 10 customers and prospects in the Netherlands. Today, we continue to discuss how we can accelerate the application of a new AI certification label with the companies that developed it.

Is the path to Governed AI the new Zelda Quest? Maybe. One thing is for sure: Dataiku is proud to be part of it.

