As with all other key aspects of the economy, the global health crisis we are going through is having a deep impact on AI developments in organizations. In an environment where remote work has become the norm, COVID-19 acts as a catalyst for reinforced data usage: many companies need to develop a strong, data-supported understanding of the new normal and react accordingly. Yet this shouldn't overshadow other structural trends happening in the background, starting with the emergence of a new regulatory framework that will deeply reshape how AI is scaled.
This blog post is the second in a series (check out the first one here) focusing on the directives, regulations, and guidelines that are on the verge of being enforced, and on their key takeaways for AI and analytics leads. Today, we are zooming in on the recently introduced Bill C-11, also known as the Canadian federal data privacy and AI regulation bill. Bill C-11 largely reflects the recommendations of the national data privacy regulator (the OPC*) and sets a precedent in the government's ambition to better manage the risks linked to AI. The bill is not law yet; it still needs to proceed through committee review, and probably industry consultation, throughout the year. Please note that this blog post does not cover Quebec's Bill 64, which primarily focuses on replicating the already active European data privacy regulation (GDPR).
*OPC: the Office of the Privacy Commissioner of Canada ("Commissariat à la protection de la vie privée au Canada") is the administrative authority responsible for overseeing compliance with the two federal privacy laws: one for the public sector, the Privacy Act, which covers the personal information-handling practices of federal government departments and agencies, and one for the private sector, the Personal Information Protection and Electronic Documents Act (PIPEDA).
Reforming the Existing Data Privacy Law to Address the Disruptive Nature of AI
The recommendations released by the OPC in November 2020 are the result of a two-year-long process, driven by the OPC's conviction that, while AI can drive significant benefits, it also presents fundamental challenges to data privacy principles. One of the outcomes is the decision to revisit Canada's main data privacy law, known as PIPEDA (Personal Information Protection and Electronic Documents Act), which defines the dos and don'ts of the consensual collection, processing, and analysis of personal information for commercial purposes.
Why? As part of its work, the OPC identified that the current data privacy regulation does not satisfactorily apply to AI systems in the private sector. A key concern is that AI systems, which are by nature (personal) data-intensive, can easily conflict with fundamental data principles such as data minimization. Like other data regulators around the world, the OPC also expresses concerns about the capacity of AI to drive decisions that deeply affect citizens, raising risks of privacy violations, unlawful bias, and discrimination.
How? To confirm the initial PIPEDA reform proposals, the OPC initiated a public consultation in January 2020 to collect feedback from both field experts and civil society on how to address AI challenges through regulation. After 86 submissions, two in-person consultation sessions, and a policy report, the OPC published its key recommendations for regulating AI systems on Nov. 13, 2020 (details below).
What is the impact? The OPC's recommendations were later used to support the introduction of the Canadian privacy law reform bill in Parliament, also known as Bill C-11. This upcoming law enacts the OPC recommendations through two acts: the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act. Together, they create new regulatory tools to enforce compliance, remedies for non-compliance, and a tribunal to hear appeals against those remedies. Let's find out what this all means for the use of AI systems by the private sector in Canada!
How the Bill C-11 Is Expected to Deeply Reshape AI Usage in Canada
The recommendations of the OPC were nearly all included in Bill C-11 and fall into two categories:
- Data protection: Aligning with existing data protection regulations, Bill C-11 redefines the modalities of and exceptions to consent for data collection (see the reminder below), sets new data principles (data portability and mobility), and establishes privacy-preserving techniques (de-identification).
- Introducing AI regulation: Bill C-11 is the first of its kind in its introduction of regulatory tools for AI, including new rules for automated decision systems as well as the regulator's ability to audit organizations and issue fines. By doing so, Canada becomes the first country to issue binding requirements related to AI.
1. Modalities for Collecting, Processing, and Analyzing Personal Information
Here are the modalities for obtaining consent, and the cases where consent is not needed:
When to collect consent: Private organizations can collect personal information only with consent, obtained before or at the time of collection, or before any new use or disclosure of such information.
What information should be shared to obtain consent: Any time consent is collected, the organization must notify individuals in plain language of the type of personal information that the organization collects, uses, and discloses, and of the purposes, manner, and consequences of the collection, use, and disclosure.
What information should be shared to maintain consent validity: Organizations must make available at any time information, again in plain language, that explains the policies and practices they have put in place to fulfill their CPPA obligations. Organizations must also identify any third parties to whom personal information will be disclosed.
Exceptions to consent: Consent does not need to be collected when the personal information is de-identified, when it is transferred to service providers, or when consent is impractical to collect for public-interest or legitimate purposes.
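To make the de-identification exception more concrete, here is a minimal, hypothetical sketch of one common building block: replacing a direct identifier with a salted hash. This is only an illustration; real de-identification under the CPPA would also need to address indirect identifiers and re-identification risk, and the field names below are invented for the example.

```python
import hashlib
import secrets

# Illustrative only: pseudonymize a direct identifier with a salted hash.
SALT = secrets.token_hex(16)  # keep secret; rotate per dataset

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with an irreversible salted hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "province": "ON", "age_band": "30-39"}
deidentified = {**record, "email": pseudonymize(record["email"])}
```

Note that hashing alone is a pseudonymization step, not full de-identification: the remaining quasi-identifiers (province, age band) may still allow re-identification when combined with other datasets.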
2. Rules for AI Design
Unlike the GDPR, Bill C-11 establishes a right to explanation not only for AI systems that would replace the judgment of a human decision maker, but also for those that assist such decisions. The scope of the explanation includes why the decision, recommendation, or prediction was made, as well as how the individual's personal information was used.
Organizations must also describe in their policies any automated decision system that could have significant impacts on individuals, and ensure that it is auditable and offers a suitable level of explainability.
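One practical way to prepare for these requirements is to log, for every automated decision, both the "why" (plain-language reasons) and the "how" (which personal information was used). The sketch below shows what such a record could look like; the field names are illustrative assumptions on our part, not something prescribed by Bill C-11.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit record for one automated decision."""
    decision_id: str
    model_version: str
    outcome: str                 # e.g., "loan_declined"
    reasons: list                # plain-language reasons for the outcome
    personal_data_used: list     # personal information fields that fed the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a credit decision that could later be explained on request.
record = DecisionRecord(
    decision_id="D-2021-0042",
    model_version="credit-scoring-v3",
    outcome="loan_declined",
    reasons=["debt-to-income ratio above threshold"],
    personal_data_used=["income", "existing_debt"],
)
```

Keeping such records per decision, rather than reconstructing explanations after the fact, is what makes a system auditable in practice.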
What does it mean for organizations? It widens the compliance scope of corporate AI projects. For example, once Bill C-11 becomes law, any recommender system currently leveraged to assist decisions in financial services, retail, or life sciences would require compliance. In addition, organizations need to be able to track and retrieve information easily, especially regarding impact assessments.
3. Potential Fines for Non-Compliance
Bill C-11 is a game changer for organizations when it comes to its enforcement model. Non-compliance with the federal privacy law can result in a penalty of up to $10,000,000 or 3% of the organization's annual gross global revenue, whichever is higher, and up to $25,000,000 or 5% of annual gross global revenue, whichever is higher, for the most serious breaches.
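Reading the penalty ceiling as the higher of the fixed amount and the revenue percentage, the financial exposure can be sketched in a few lines (the revenue figure below is an arbitrary example):

```python
def max_penalty(gross_global_revenue: float, serious: bool = False) -> float:
    """Upper bound of the fine: the higher of a fixed amount and a share
    of annual gross global revenue (figures from the bill as introduced)."""
    fixed, share = (25_000_000, 0.05) if serious else (10_000_000, 0.03)
    return max(fixed, share * gross_global_revenue)

# For a company with $1B in annual gross global revenue:
max_penalty(1_000_000_000)                # 30,000,000 (3% exceeds $10M)
max_penalty(1_000_000_000, serious=True)  # 50,000,000 (5% exceeds $25M)
```

For large organizations, the revenue-based cap quickly dominates the fixed amount, which is what gives the enforcement model its teeth.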
How does it work? The Commissioner has the power to recommend a penalty after an audit, and the newly established Personal Information and Data Protection Tribunal imposes the penalty on the organization. The Commissioner can also issue orders directly in specific cases, without going through the tribunal; these orders can, however, be appealed to the tribunal within 30 days.
What are the rights and obligations for organizations that collect and use data? The bill also establishes a private right of action, allowing recourse to the courts in certain circumstances.
While Bill C-11 is not yet law, it gives a solid indication of where Canada is heading and should be a powerful signal to organizations of the need to reinforce their frameworks around AI.
What Does This Mean for Organizations Scaling AI?
It's only the beginning! Bill C-11 is likely to be amended as it moves through the legislative process, but binding regulation seems inevitable, especially since more OPC recommendations remain to be implemented, such as the right to contest AI decisions and the need to conduct privacy impact assessments.
Although we are only at the beginning of AI regulation compliance, we observe strong incentives for organizations to start thinking through AI governance. What does it mean for my organization? What kind of resources do I need? More people, more processes, more technology? All three?
If we take Bill C-11's right to explanation as a compliance example, it can be challenging for organizations to deliver AI explanations and data reporting without the right processes, or the right people to design them. And if there is more than one AI system in need of compliance, manual processes might not be enough.
The role of platforms like Dataiku is to help organizations understand how they work with data and build AI systems, so they can later provide best-in-class explanations and data reporting.
At Dataiku, we look at AI governance as an opportunity to build resilience: developing an operational model that lets AI grow organically, eliminates silos between teams, and provides a high-level view of what is going on. Dataiku allows users to see which projects, models, and resources are being used, and how. Model audit documentation helps teams understand what has been deployed. Plus, there is full auditability and transferability of everything, from data access to deployment, for each and every project teams work on.
As a supplier of an AI solution, we leave it to our customers to define and develop their own AI frameworks, but at the same time we provide the technology and tools to govern AI and comply with upcoming regulation. Where to start, then? If you have not already, take a look at the first article in this series! The French financial services regulator provides key insights into building the right AI governance processes.