Setting Direction for AI Governance

Scaling AI | Jacob Beswick

We've recently published an ebook on AI Governance. Focused on AI regulations, it provides perspective on the new rules rolling out across the world that will impact AI developers, users, and the wider public.

→ Download the Full Ebook Now

Throughout "A Global Look at Emerging Regulatory Frameworks for AI Governance," you’ll read about what countries are planning or publishing on AI regulation. This covers reflections on how AI could and will be regulated, including enforceable laws, their scopes, and procedural requirements (as with the European Commission, or EC), as well as guidelines that adapt existing laws to AI. But regulation only works well when there is a framing device that clarifies its ambition or purpose, and this is no different in the context of AI. In general, this means filling in the blanks: regulate in order to protect (against)/prevent/ensure/enable X.

With respect to AI, the details about the X-factor have created opportunities for debate. After all, knowing what AI regulation and, more broadly, AI governance is intended to achieve will substantially shape what rules and procedures are put in place. It will also help dictate to AI developers and users what exactly they need to do in order to show they are complying with the letter of the law.

Often, we hear about ‘ethical,’ ‘trustworthy,’ and ‘responsible’ AI. These concepts refer to normative frameworks that are loaded, in flux, and responding to real, perceived, and anticipated risks. By ‘normative frameworks,’ I mean values-based frameworks that speak to what we care about and why. And by ‘risks,’ I mean the observed and anticipated harms that arise from the increasing use of AI — against individuals, groups, and businesses. This blog post is a whistle-stop tour of normative frameworks produced by governments and international organizations, intended to provide some insight into the direction that the regulations discussed in the ebook are taking.

The European Union

The European Union is one of the most mature participants in AI regulation. The normative framework underpinning the EC’s proposed AI Act focuses on making AI:

1. Safe

2. Legal

3. Trustworthy

Reading the AI Act reveals further components of this framework which speak to or refine the first three. These additional components extend to:

4. Human-centricity

5. Ethicality

6. EU rights (including non-discrimination and privacy) 

7. EU values (including respect for human dignity, freedom, equality, democracy, and the rule of the law)

8. Accountability, responsibility, and responsible innovation

The European normative framework extends back to the work of the High-Level Expert Group (HLEG) and its “Ethics Guidelines for Trustworthy AI,” and it has been refined over time through member states’ meetings and consultations. The guidelines include a helpful illustration of what the HLEG argues are seven requirements that enable AI to be trustworthy, requirements that have influenced the AI Act’s underlying normative framework.

[Image: the HLEG’s seven requirements for trustworthy AI]

Unlike many other geographies, Europe has begun transitioning these values into rules. This means filling in the blanks above in such a way that AI regulation should aim to create:

Predictable, proportionate and clear obligations ... to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems’ lifecycle.

In the EC’s proposed AI Act, specific requirements (discussed in a dedicated section of the ebook) are clarified and treated as the means by which the above values-based framework will be realized.

But the European framework isn’t the only one on the block, and while it will likely shape how other geographies conceptualize AI normative frameworks and AI regulation, there are other framing devices that have influenced, and continue to influence, the development of rules.

Council of Europe

Not to be confused with the EC, the Council of Europe has a wider membership than the European Union and a quite different origin story. Having launched CAHAI (the Ad Hoc Committee on Artificial Intelligence), the Council committed to exploring, through a feasibility study, whether and how to implement a legal framework on AI. The normative framework employed aligns with the Council of Europe’s remit. That is, it is examining the feasibility and potential elements of:

A legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.

The aforementioned feasibility study breaks the three components of this normative framework down further, highlighting key features of each that could be impacted by AI if it is not properly regulated.

The OECD Principles for Responsible Stewardship of Trustworthy AI (2019)

Like other normative frameworks, the OECD AI Principles, adopted as a Recommendation, seek to promote the following:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

As a non-legislative intervention, the OECD recommends that its Members, non-Members, and wider AI actors alike adhere to, promote, and implement these principles, though it has no legal leverage to ensure this materializes.

At a practical level, the OECD recommends that adherents implement this framework through national-level policy and international cooperation. It suggests actions spanning policy promotion, wide stakeholder engagement, knowledge sharing, and investment, but gives no indication of precise regulatory measures.

UNESCO

The latest to join the global community’s efforts towards developing normative frameworks for AI is the United Nations Educational, Scientific and Cultural Organization (UNESCO). UNESCO’s normative framework informs its draft Recommendation on AI Ethics, which prioritizes education, science, culture, and communication and information (the areas it’s responsible for). To this end, UNESCO has taken the opportunity to build a framework that:

Provide(s) a basis to make AI systems work for the good of humanity, individuals, societies and the environment and ecosystems, and to prevent harm. It also aims at stimulating the peaceful use of AI systems.

However, unlike the European Union and the Council of Europe, and similar to the OECD, UNESCO’s work on AI does not aspire to create legal instruments directly. With that said, it has the widest membership of the organizations discussed and the explicit ambition to make this a ‘universal framework of values, principles and actions’ that can guide states globally in their domestic approaches to the regulation or governance of AI.

So what are the components of UNESCO’s normative framework? The values they mention include:

  • Respect, protection and promotion of human rights and fundamental freedoms and human dignity
  • Environmental and ecosystem flourishing
  • Ensuring diversity and inclusiveness
  • Living in peaceful, just and interconnected societies

The principles they mention include:

  • Proportionality and Do No Harm
  • Safety and security
  • Fairness and non-discrimination
  • Sustainability
  • Right to Privacy and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability
  • Awareness and literacy
  • Multistakeholder and adaptive governance and collaboration

In an ideal scenario, this means creating a world where:

Ethical values and principles can help develop and implement rights-based policy measures and legal norms, by providing guidance with a view to the fast pace of technological development.

Commonalities, Differences, and Why This All Matters

At the global level, AI is seen as a potential socio-economic discontinuity: an opportunity to pivot from the ‘old ways’ of tedium, inefficiency, and human limitations to a world where work can be done better, faster, and more efficiently.

That being said, as the normative frameworks mentioned here indicate, that discontinuity does not guarantee outcomes that benefit everyone everywhere equally. As such, the perceived and anticipated risks that AI poses have given public and private actors globally reason to think carefully about how they want the development and use of artificial intelligence to materialize. For states and international bodies, this is generally about promoting the well-being, safety, and humanity of publics who are not otherwise guaranteed protections as a matter of course.

The concerns about AI are broadly shared across the aforementioned normative frameworks, motivated by common questions like:

What are the risks of using AI?

Who could be harmed?

How will AI impact our shared world? 

What can we do about it?

Ultimately, as discussed in the ebook, different states or collections of states are asking and answering all of these questions in turn. And while the normative frameworks at play in large part speak the same language, the ways in which they are operationalized and translated into law and new processes may look different. This, in fact, was noted by the Council of Europe’s CAHAI in their Feasibility Study:

This mapping [of 116 documents on ‘ethical AI’ from private companies, academic, and public-sector organizations, primarily developed in Europe, North America, and Asia] revealed that current AI ethics guidelines converge on some generic principles, but — to the extent they give practical guidance — they tend to sharply disagree over the details of what should be done in practice.

Time will tell how AI regulations in practice meet their intended ambitions and how key international players will adapt as the technologies mature. For the time being, it’s enough to know that major markets are moving in the direction of creating new requirements based on the normative frameworks outlined here. What these requirements look like will ultimately inform how individual companies operationalize AI governance — not only for their own business-level priorities but also to meet wider, external requirements. 

At Dataiku, we look at AI governance as an opportunity to build resilience by developing an operational model that lets AI grow organically, eliminates silos between teams, and provides a high-level view of what is going on. "A Global Look at Emerging Regulatory Frameworks for AI Governance" illustrates the wider world and how external requirements are evolving. Insights into external requirements are helpful, but providing you with capabilities and guidance on good AI governance that reflect those external requirements and meet your business’s particular needs is essential. We’re doing just that. Get in touch to discuss how we can support you on governing AI.
