A European Perspective: AI and Liability

Scaling AI | Jacob Beswick

Since summer 2022, Europe, the U.S., and the U.K. have each delivered new AI policy and regulatory proposals, spanning AI and liability, principles and practices to shape AI development and deployment, and a prospective approach to AI regulation, respectively.

This blog covers the European Commission’s publication of a proposed AI Liability Directive, which sets out expectations for individuals and organizations where defects or faults in AI-enabled products and services cause harm that individuals may raise as a liability claim.

Why this matters: In laying this foundation, the European Commission’s proposal sets expectations for organizations that leverage AI in consumer-facing products and services. In turn, this signaling is a useful impetus for developing and implementing strong AI Governance practices. 

Liability and AI in Europe

With several reports concerning AI and liability published in 2020, the European Commission ran a public consultation in 2021 whose responses indicated that the 1985 Product Liability Directive wasn’t suited to new technologies like AI. In late September 2022, the Commission released a proposed directive targeting the ‘harmonization of national liability rules for AI, making it easier for victims of AI-related damage to get compensation.’ In terms of scope, the proposed Directive is concerned with establishing common rules on non-contractual, fault-based civil law claims (i.e., harms to individuals caused by AI systems due to a defect). Some key features of the Directive include:

  1. The disclosure of evidence on high-risk AI systems to enable a claimant to substantiate a claim
  2. Expectations and terms around the information that is disclosed
  3. The burden of proof where a claim is brought to national courts for damages

The terms of how liability claims are made by individuals will matter to companies with consumer-facing products and services leveraging AI. At this stage, understanding what is expected of companies and how these expectations align with the proposed AI Act may help organizations prepare best practices that support future compliance with both the Act and the Directive.

An important caveat: As I am not a specialist in liability law or policy, this blog will best serve readers looking to understand the why and the what, but not so much the how (or the legal mechanisms at play).

The Why:

Harmonizing these rules across the EU serves a dual-pronged strategic goal: ensuring consistency in individuals’ rights and abilities to make claims, as well as consistency in expectations across businesses operating within the EU. With respect to the former, 2021’s public consultation on AI and liability revealed challenging areas, such as whether individuals can access sufficient information about AI systems to substantiate a claim.

This Directive is an important step in building out a coherent regulatory framework in Europe: it builds on the proposed AI Act and seeks to support the strategic aims of the Coordinated Action Plan on AI concerning public trust and, therefore, the uptake of AI. While the AI Act’s product safety approach to regulation creates a risk-based system that associates new requirements with high-risk categories of AI, it does not address individuals’ rights in instances of damages or harm. The proposed AI Liability Directive fills this gap.

The What:

And so the AI Liability Directive picks up where the AI Act’s risk-based approach fails to eradicate risks and harms in practice. In doing so, it can be seen as an enforcement mechanism for the requirements associated with high-risk AI systems found in the AI Act. This relationship becomes clearer in Articles 3 and 4, which commentators have noted may rattle some organizations leveraging AI.

In short: With respect to disclosure of evidence and burden of proof, the Directive sets out expectations on when, how, and what information should be shared where an individual has made a claim. Paraphrasing portions of Article 3 (Disclosure of evidence and rebuttable presumption of non-compliance):

  1. Where an individual making a claim has tried to gather evidence from an organization but failed to receive it, a court can intervene to demand that evidence
  2. To prompt that intervention, the individual must present facts and evidence supporting their claim
  3. Where such a demand is made by a court, there will be some safeguards with respect to an organization’s privileged information as well as limitations around evidence that is necessary and proportionate to the claim made

These interventions address questions raised in the 2021 consultation, including the challenges posed by some AI systems, and the “black box effect” in particular, which was identified as making it difficult for an individual to prove fault and causality (i.e., that the harm they experienced was caused by the AI system).

The (Superficial) How:

Article 4 (Rebuttable presumption of causal link in the case of fault) explores scenarios in which the fault of a defendant organization can be determined. Two of these scenarios resonate with the strategic outcomes of this Directive, namely building a coherent regulatory ecosystem for AI in Europe.

The first of these scenarios allows a court to presume the fault of a defendant organization when it does not comply with a court order to disclose evidence (as above).

The other scenario refers to when an organization’s high-risk AI system (as determined by the AI Act) has been deployed in full compliance with the obligations set out in the AI Act. In this scenario, the defendant can demonstrate that sufficient evidence and expertise is reasonably accessible for the individual to prove the claim they have made. The Commission views this scenario as an incentive to comply with the AI Act once it is enforced.

Where non-high-risk AI systems are concerned (recall that these do not face the same requirements as high-risk systems under the AI Act), if an individual finds it exceedingly difficult to prove the relationship between the system’s malfunction and the harm claimed, then a court may presume a causal relationship between the two.

Reflections

At the time of the AI Act’s publication, some corners of the wider AI ecosystem asked why liability had been omitted. With the proposed Directive, we can see that the European Commission is exploring how to reach its strategic outcomes (discussed at the outset) while meaningfully empowering individuals to substantiate liability claims.

From the perspective of organizations leveraging consumer-facing AI, take note (better yet, ask your legal teams to take note). The proposals here demonstrate the potential future powers of national courts to demand critical information about AI systems to investigate claims made by individuals. According to the Directive, compliance with the AI Act’s high-risk requirements is expected to set organizations up to have that information readily available. 

For organizations whose customer-facing AI does not require compliance with the high-risk requirements, questions may be asked internally as to whether the risks of harm to customers are significant enough to warrant voluntary compliance with those requirements as a hedge. Alternatively, those questions may focus on what good AI Governance looks like in general, with a view to ensuring that any future claims made under the Directive’s purview can be fulfilled.
