The World of AI Is Shifting, Even in the US

Dataiku Product, Scaling AI | Paul-Marie Carfantan

As with every other key aspect of the economy, the global health crisis we are going through is having a deep impact on AI development in organizations. COVID-19 is acting as a catalyst for intensified data usage: many companies need to develop a strong, data-supported understanding of the new normal and react accordingly. Yet this shouldn’t overshadow other structural trends happening in the background, starting with the emergence of a new regulatory framework that will deeply reshape how AI is scaled.

This blog post is the fourth in a series (check out the first one on France here, the second on Canada here, and the third on the EU’s AI regulatory proposal here) focusing on these directives, regulations, and guidelines, which are on the verge of being enforced, and their key learnings for AI and analytics leads.

This month, we are making an exception! We are zooming in on not one but several ongoing legislative initiatives unfolding in the United States. We will take a bird’s-eye view to understand the shifting dynamics of a country that has so far favored AI self-regulation over regulation.

Why? Since the EU’s AI Regulation proposal, which called for a ban on specific types of AI applications and tight governance of others, there has been a lot of discussion about its implications for countries that haven’t yet taken much of a stance. The U.S. is one of them, and it seems the Wild West days of AI are numbered!


The Wild West of AI

In 2016, the U.S. government was one of the first to draw attention to the questions that progress in AI raises for society. Many guidelines outlining principles and recommendations followed from various government offices,* but the momentum hit a wall: there was a strong belief that any form of further control would stifle innovation and economic prosperity for the country. In 2020, the administration even called for further deregulation.


In the Wild West of AI, there are no laws or lawmen.

Interestingly enough, many companies did not choose to go ungoverned in the absence of a legal framework. We have notably seen many technology players establish their own ethical principles charters or dedicated governance committees as a way to organize their efforts and identify dos and don’ts.

Those days of self-regulation alone are now numbered. As companies increasingly deploy their solutions on the market and AI scandals continue to echo through the media, a framework for systematically analyzing AI systems’ risk seems fundamental, especially when the solutions deployed affect all walks of life, nearly everywhere, at any time.

Right on cue, the National Security Commission on Artificial Intelligence’s March 2021 Final Report urged the adoption of a cohesive and comprehensive federal AI strategy. But what should this strategy look like for policymakers facing historical pushback against regulation?

*The White House’s Office of Science and Technology Policy, the National Institute of Standards and Technology, and the Defense Innovation Board, to name a few.

The Wild West Isn't so Wild Anymore

The answer might lie in a series of initiatives that is adding pressure on policymakers. 

For years, only one piece of proposed legislation at the federal level attempted to swim against the tide. Introduced in Congress in 2019 by Senator Wyden of Oregon, the Algorithmic Accountability Act (AAA) intended to make impact assessments a requirement for organizations using software for sensitive automated decisions or to “make decisions that can change lives.” The bill would have been overseen by the Federal Trade Commission (FTC) and would have applied to both new and existing systems. It never progressed past the committee level.

Yet, with the new political landscape and the growth of national frameworks, with the EU at the forefront, the bill could come back into fashion. It is expected to be reintroduced in the coming months and is likely to benefit from much stronger momentum:

1. The bill echoes popular demands for end-customer protection.

The AAA bill would require companies to assess their use of AI systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy, and security, with the ultimate objective of better protecting end customers. Strong reactions to recent controversies have shown how important this protection is to U.S. citizens. The fact that these criteria and more are articulated in the European proposal might lend additional legitimacy to the new AAA bill.

2. The bill leverages a federal agency that has its own agenda. 

The bill entrusted the FTC with overseeing its implementation, and rightly so! In a short blog post last April, the FTC outlined its plans to go after companies using and selling biased algorithms. It also plans to scrutinize claims about AI products, ensuring they are “truthful, non-deceptive, and backed up by evidence.” Although it might take many legal challenges in court to make this a standard, it’s a very good start.

3. The regulatory environment in key sectors, like pharmaceuticals and financial services, is reaching maturity.

Industry-specific agencies like the FDA (Food and Drug Administration) and the Fed (Federal Reserve System) have been implementing requirements (GxP for life science organizations and SR 11-7 for financial services organizations) to ensure the safety of the products and services sold. As these organizations modernize and leverage AI, they need to meet these requirements too. In other words, AI regulation might build on existing regulations that already apply to AI systems in specific industries.

4. The new administration is committed to modernization.

President Biden’s initial economic recovery plan included a $9 billion increase to the Technology Modernization Fund (TMF), along with other funding measures to upgrade federal government technology and improve IT security. Although the TMF funding was ultimately cut, it signaled the new administration’s clear direction on IT modernization.

5. New actors are feeling increasingly comfortable with tackling algorithmic discrimination.

Lawyers are updating their tools for the algorithmic age. Whether for housing discrimination, credit, or other fundamental services (areas the EU proposal also covers, by the way), lawsuits are setting precedents and expanding consumer rights.

Meanwhile, cities like New York are proposing their own laws to regulate AI. In this case, the New York City Council wishes to update hiring discrimination rules to cover AI software. As with the AAA, companies would be required to perform annual audits to ensure their technology is not discriminatory.

Indirectly, these elements are structuring the AI regulation debate that is waiting to happen. It’s highly likely that within two years, AI policy tools, whether binding or not, will be in effect and enforced. By then, it might be too late for organizations that didn’t anticipate governance and compliance requirements.

Do You Have the Right Ammunition in Your Toolbox?

It’s hard to deny it: AI will need to go hand in hand with governance and compliance very soon. My advice is simple: don’t wait! If you do, your organization will likely build up AI governance debt.* For example, you might find yourself in a “hands up!” situation when the regulator asks, “Where are your models?” or “What metrics did you choose to validate your models?”


Time to surrender your arms and get the right tools for AI governance!

As mentioned in the last article, there are already a lot of resources to kickstart your thinking on this topic. There are also tools to make it easier! Funny you should ask: that’s our job at Dataiku.

Dataiku allows users to view which projects, models, and resources are being used, and how. Model audit documentation helps you understand what has been deployed. Plus, there is full auditability and traceability of everything from data access to deployment, for each and every project the organization works on.
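If you want a sense of how quickly you could answer “Where are your models?” today, a simple instance-wide inventory is a good first step. What follows is a minimal sketch using Dataiku’s public Python API client (the dataikuapi package); the host URL and API key are placeholders, and a real governance checklist would go well beyond this listing.

# A minimal sketch of a model inventory, assuming the public
# dataikuapi client (pip install dataiku-api-client).
# The host URL and API key below are placeholders.
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "YOUR_API_KEY")

# Enumerate every project on the instance...
for project_info in client.list_projects():
    project_key = project_info["projectKey"]
    project = client.get_project(project_key)

    # ...and the saved models each project exposes: a first answer
    # to the regulator's "Where are your models?"
    for model in project.list_saved_models():
        print(project_key, model.get("name"), model.get("id"))

From there, the same client can feed whatever inventory or documentation process your governance framework requires.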

As a supplier of an AI solution, we leave it to our customers to define and develop their own AI frameworks, but at the same time we provide the technology and tools to govern AI and support compliance with upcoming regulations. It’s a must-have for any AI scaling journey!

*i.e., the implied cost of additional rework incurred by choosing not to address AI governance stakes early enough in the development and commercialization process
