AI Regulation in the UK: Striking a Balance

By Jacob Beswick

In late March 2023, the U.K. government released its anticipated AI regulation policy paper, “A Pro-Innovation Approach to AI Regulation,” along with an associated call for feedback.

Joining the ranks of peers near and far, the policy paper elucidates a vision for AI regulation that is at once “pragmatic [and] proportionate” and that addresses “AI-specific risks [with the] potential to cause harm.” In the balance, efforts are made to secure public trust in AI while bringing clarity and coherence to the regulatory landscape, promoting responsible innovation, and meeting priorities around growth and prosperity. Achieving these outcomes is no easy task and, indeed, some of the unique proposals raise questions about the feasibility of their implementation and what they could mean for organizations operating within the U.K.


A Principles- and Context-Based Approach

The means to achieving these ends is, fundamentally, a principles-based and context-specific approach to regulation. Practically, this means that, on the one hand, principles (detailed later on) will help set government and regulators’ expectations about what is permissible and inform good AI governance. On the other hand, these expectations will be further specified with respect to a given use case’s industry, function, context of application, and expected impact.

Put another way, the U.K.’s approach is about “regulating the use, not the technology,” considering the value or opportunity of a use case in relation to the risks it poses and what proportionate measures are necessary to address those risks.

In theory, this differs from the European Union’s approach to risk-tiering, which sets out specific domains (e.g., “Employment, workers management, and access to self-employment”) and sub-domains (e.g., “AI systems to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interview or tests”) and applies the same risk tier and requirements to all use cases that fall within them.
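
To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The domains, scores, and thresholds are my own illustrative assumptions (neither the U.K. paper nor the EU AI Act prescribes anything of the sort); the point is only the shape of the two approaches: a fixed tier per domain versus a proportionate response per use case.

```python
# Hypothetical sketch contrasting the two regulatory styles. The domains,
# scores, and thresholds below are illustrative assumptions, not anything
# prescribed by the U.K. paper or the EU AI Act.
from dataclasses import dataclass

# EU-style: the domain alone determines the risk tier, and every use case
# falling within that domain inherits the same requirements.
EU_DOMAIN_TIERS = {
    "employment_recruitment": "high-risk",
    "spam_filtering": "minimal-risk",
}

def eu_style_tier(domain: str) -> str:
    """Fixed tiering: the same tier for all use cases in a domain."""
    return EU_DOMAIN_TIERS.get(domain, "unclassified")

# UK-style: weigh a specific use case's identified risks against its
# expected benefits and scale the regulatory response proportionately.
@dataclass
class UseCase:
    description: str
    expected_benefit: float  # 0..1, illustrative score
    identified_risk: float   # 0..1, illustrative score

def uk_style_response(use_case: UseCase) -> str:
    """Context-based: a proportionate measure per use case, not per domain."""
    if use_case.identified_risk > 0.7:
        return "targeted intervention"
    if use_case.identified_risk > use_case.expected_benefit:
        return "enhanced oversight"
    return "light-touch monitoring"

cv_screener = UseCase("CV screening tool", expected_benefit=0.6, identified_risk=0.8)
print(eu_style_tier("employment_recruitment"))  # high-risk, regardless of context
print(uk_style_response(cv_screener))           # targeted intervention
```

The EU-style lookup answers the same way for every use case in a domain, while the U.K.-style function can return different responses for two use cases in the same domain, depending on their context.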

The paper sets out a multi-part intervention, spanning a principles-based framework and a series of pragmatic measures that delineate regulators’ responsibilities and the resources available to them and to relevant AI actors.

The Why of It All: Recognizing a Common Set of Priorities

But before diving into the specifics of the proposed interventions, it’s worth centering attention on the why of it all. This part of the story isn’t new, but it is important. Laying the foundation for the proposal is a recognition that AI-specific risks have the potential to adversely impact values including “safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.” These values resonate with those articulated elsewhere, whether by the European Union, United States, Singapore, Canada, or the OECD. At the very least, this starting point should give individuals, groups, and organizations some confidence in the growing global coherence on the why of AI regulation and governance.

Understanding the Proposals

With the paper’s ambitions in mind, the interventions (or changes proposed) can be grouped into four main buckets:

1. A clarification of five (initially) non-statutory principles that will help to inform how regulators assess, prioritize, and potentially intervene in markets. 

They include:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

2. A delineation of responsibilities at the level of existing regulators, rather than the creation of a new regulator to take charge of AI. The paper also leans away from a non-regulatory approach, which would entrust organizations to adhere to the aforementioned principles without oversight.

Significantly, and in contrast to its European neighbors, the U.K. is looking to make existing regulators responsible for ensuring these principles are effectively implemented within their respective purviews (financial services and insurance for the FCA, medical devices for the MHRA, and so on).

While AI is already regulated by existing legal frameworks, the paper recognizes that some AI risks could nonetheless arise in the gaps between regulatory remits; the third intervention below is listed as one means of addressing this. Without a means of dealing with such gaps, risks could go unmitigated and harms could materialize, adversely affecting both the public and the potential positive impacts of AI.

3. A government-housed central function designed to address incoherence and gaps while maintaining market awareness, bringing coherence to the proposed regulatory framework and more consistent expectations with respect to compliance.

4. A recognition of the U.K. government’s work on AI assurance as a non-regulatory means of supporting responsible innovation. The paper proposes AI assurance techniques, technical standards, and sandboxes and testbeds as tools and resources through which the aforementioned principles and regulatory compliance can be upheld (a minimal illustrative sketch follows below).
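
To connect buckets 1 and 4, here is a minimal, hypothetical sketch of how an organization might track assurance evidence against the five principles for each use case. The checklist structure and evidence fields are my assumptions; the paper does not prescribe any such mechanism.

```python
# Hypothetical sketch: the five principles as a per-use-case review checklist.
# The evidence-tracking structure is an assumption, not the paper's mechanism.
UK_PRINCIPLES = [
    "Safety, security, and robustness",
    "Appropriate transparency and explainability",
    "Fairness",
    "Accountability and governance",
    "Contestability and redress",
]

def unevidenced_principles(evidence: dict) -> list:
    """Return principles for which no supporting evidence has been recorded."""
    return [p for p in UK_PRINCIPLES if not evidence.get(p)]

# Example: a use case with assurance evidence recorded for two principles.
gaps = unevidenced_principles({
    "Fairness": "Bias audit completed April 2023",
    "Accountability and governance": "Named model owner and sign-off workflow",
})
print(gaps)  # the three principles still lacking documented evidence
```

A structure like this is one simple way assurance techniques and standards could be mapped to the principles in day-to-day governance work.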

In addition to the above, the paper raises some relevant and important themes. One of these is the idea of “foundation models,” described as “general purpose AI that are trained on vast quantities of data and can be adapted to a wide range of tasks.” LLMs (such as those behind ChatGPT) are an example, and the paper recognizes that their development and distribution approaches are worthy of particular consideration in the context of the regulatory framework outlined.

Wrapping Up: Reflections on the U.K.’s Policy Paper

This blog is at once an attempt to give the headlines of the U.K.’s proposal on AI regulation and an opportunity to offer some considered reflection. Looking to the U.K.’s nearest neighbors for comparison: where the Europeans have, in their words, embraced a “risk-based approach,” the British are proposing one that is “context-based [and] proportionate…balancing the real risks against the opportunities and benefits that AI can generate.”

My reading is that these are thematically linked, though the U.K. appears to have a risk tolerance that tracks with the potential opportunity of a given use case. The European Union’s approach reads as fixed, or less flexible; the British, as more fluid. Time will tell how this plays out. But from the perspective of someone working closely with organizations around the world to build out AI Governance practices leveraging Dataiku Govern (often to support compliance activities), understanding regulatory requirements matters: it sets consistent expectations, and it means investments can more easily transform into innovations and value. With that, three high-level reflections:

1. The dedication to adaptability in how regulation impacts the use of AI feels pragmatic and sensible.

2. However, in a world where AI is scaling within and across organizations, this pragmatism might be confronted by resource constraints on the side of the public sector and regulators. And if this is the case, it’s hard to anticipate what this will mean for U.K.-based organizations leveraging AI. It leaves open the question: Does this approach have implications for time to value?

3. And further to this point: if the adaptability of regulatory interventions (which depends on use cases, the contexts of their application, and potential benefits or impacts) creates a high level of diversity in what compliance requirements look like, I can’t help but wonder how, day-to-day, organizations’ data science, business, and compliance teams will cope. This draws on the point made in the paper regarding “regulatory incoherence”; the proposed solution of better coordination feels like it could risk being peppered with administrative burden.
