AI regulation has been, and will continue to be, a hot topic. It matters for people, for places, for businesses, and for governments. The rationale for AI regulation is systemic: at a minimum, there are meaningful questions to ask, answer, and act upon about how people will be affected by automated decision making in a variety of contexts, ensuring:
- That, in those contexts, people are safe from harm
- That businesses are incentivized, and where necessary required, to do the right thing by societal standards
- That governments put in place the right guardrails that both encourage good practices and prevent poor ones.
Dataiku has published summaries of what’s happening in the world of AI regulation. And, as discussions around regulation evolve and standards develop in different domestic and international forums, we as a company have to ask, answer, and act upon what both legal and standards-related recommendations mean for us.
This blog post is a step in that direction: it helps our customers and prospective customers understand where Dataiku fits into the regulatory and standards landscape.
Question #1: Where Do We Fit In?
Dataiku is, at its core, a platform that was conceived, and continues to evolve, to enable people across the world to benefit from analytics and data science. Our approach, Everyday AI, is inclusive, and our product’s ambition is to give the widest possible range of users access to the insights and opportunities data can provide them.
When we think of AI regulation that’s intended to benefit society, both individuals and groups, we can’t help but nod in affirmation. The European Commission’s proposal that not all AI systems and use cases carry equivalent risk, and that some should be considered ‘high risk,’ is sensible. After all, models that decide how much a person can borrow or the amount of benefits they receive, or that dictate the safety of an automated vehicle, can, if implemented badly, bring end users to critical junctures that determine health, life chances, or even life or death.
Releasing AI with such power over people’s lives into the world, without maintaining any assurances of performance, has the potential to cause serious harm. This may seem like an extreme sentiment, but we are not alone in this position. Just look to the OECD, the European Union, the United States, Singapore, the Council of Europe, and UNESCO. The list goes on.
And so, as the global community heads in a direction where AI is effectively moderated to ensure humans aren’t detrimentally impacted, doing so substantively means creating new practices that make that moderation possible. Given this, we at Dataiku have been inspired to think about what we offer our customers and those who might one day consider using our platform. We can’t help but wonder: Where do we fit in a world where new requirements are imposed on, or strongly encouraged for, developers, resellers, and users of AI?
Answering this question demands humility. We are not the world’s moral compass (or any other kind of compass, for that matter). That said, we are apprised of, recognize, and respect the direction of travel the world is taking. We also recognize that while the details of this direction are still being clarified, there’s room to help pave the road many businesses, researchers, and public sector entities will take.
As a platform, we have the scope to provide tools to help our customers think about what governing their development and deployment of AI can look like. We have scope to build out a form and function that seamlessly ties innovation and exploration to rules, requirements, and processes.
And so, to answer the question posed: We’re a facilitator and an enabler, with a platform that makes it possible for the widest possible audience to meaningfully leverage analytics and data science while seamlessly integrating governance practices.
Question #2: As a Facilitator and Enabler, What Should We Do in the Context of Regulation and Standards?
Our mission is to enable and facilitate the widest possible use of analytics and data science, and to do so in a world where expectations around how AI should be governed are fast evolving. It’s clear we must be ready to help our customers on both fronts. Whether our users run one model or a handful on our platform, or plan to actively scale their use of analytics and AI across teams, we must answer what we should do with respect to regulation and standards.
In search of the right balance, we’ve invested in developing an approach, available to our customers, that helps them build and integrate AI Governance into their organization’s analytics and AI development and deployment: Govern. Govern is an opt-in service, and with it we’re embracing our role as facilitator and enabler, so that using our product helps customers observe and meet requirements set within their organizations or by new regulations and standards.
Critically, by observing customers from different industries and geographies, as well as the related and varying legal requirements, we recognize that there is no one-size-fits-all approach to supporting effective governance.
To this end, in answering what we should do above and beyond creating core tooling through our Govern node, we are also prepared to work closely with customers and partners to enable and curate governance practices that meet particular needs. In short, what we should do is be flexible, adaptable, and supportive in addressing our customers’ specific needs, without being imposing.
Question #3: As a Facilitator and Enabler That Has Created AI Governance Capabilities, How Do We Fit In Over the Long Term?
The point remains that the new requirements being clarified through regulation and standards are still evolving. Consequently, pointing to a given proposal and treating it as absolute is suboptimal. What is possible, however, is to track both regulatory and standards development as they evolve. To ensure that the product we deliver to our customers speaks to regulation and standards over time, we have committed ourselves to observing and engaging in the development of both.
Where we fit in as a collaborative, adaptable facilitator and enabler, beyond providing the right tools, is in building up our own capabilities so that we can offer views on regulation and standards and what they might mean in practice. Again, embracing humility, we want to be at the forefront of these discussions while working with our customers as we all navigate this new landscape, all of which is fundamental to scaling Everyday AI safely.