Deloitte: An Opportunity to Build Trust

Scaling AI | Oz Karan

The article below is an excerpt from our e-magazine “AI in 2024: Hot Takes From Dataiku, Deloitte, & Snowflake” and features insights from Oz Karan, Partner, Risk & Financial Advisory at Deloitte.

The age of AI is upon us, with a broad spectrum of Generative AI capabilities now available to consumers. But much like the technologies themselves, trust in these paradigm-shifting tools will not be built in a day.

As with prior cutting-edge technologies, AI is greeted with healthy skepticism from both organizations and consumers seeking assurances about the privacy and security of their data. This skepticism is compounded by the black box these solutions can present even to AI operators, owners, and developers. As organizations grapple with their use of AI, the line between trust in the machine and trust in the organization blurs.

As AI solutions quickly integrate into many facets of everyday life, consumers and employees won’t distinguish between trust in an organization and trust in its use of AI, or its AI output. This will make AI a strategic operational opportunity and a core tenet of brand and reputation management.

The question of “Whom can I trust?” is one we ponder when assessing another’s character, intent, or motivations. Personal trust is carefully cultivated over time; it requires consistency, familiarity, and prioritization of the other person’s needs. So, when faced with a technology that obfuscates the “how” and “why” of its outputs while raising questions like “Is my data the product?”, organizations that use AI must prioritize establishing and continually proving their trustworthiness.

At Deloitte, we work to understand the risks, rewards, utility, and trustworthiness of AI so that we may help clients across industries, government, and commerce leverage the technology. Deloitte’s Trustworthy AI™ Framework provides a cross-functional approach to help organizations identify the key decisions and methods needed to maximize Generative AI’s business potential.

We know from Deloitte’s TrustID Generative AI study of over 500 respondents that consumer trust decreases by 144% when consumers know a brand uses AI [1], and that their perception of reliability drops by 157% [1]. Similarly, employees’ trust in their employers decreases by 139% when employers offer ‘AI technologies’ to their workforce [1].

Human trust in AI is an uphill climb. The organizations that focus on building AI solutions with trust by design may claim an advantage in the marketplace. 


Understanding Trust Erosion Possibilities

AI is not infallible. Countless examples in the public sphere have proved as much, from rogue text bots to intellectual property infringement to data exfiltration. Understanding and accepting that AI may fall short of human expectations can help limit the consequences of those failures and inform responsible planning and response. If, for humans, seeing is truly believing, it may be an onerous journey for AI to gain human trust.

Consider this example: Waymo’s joint study with Swiss Re found that the driverless rideshare company’s cars experience 76% fewer accidents involving property damage than human-driven cars [2]. But comparative statistics alone do not shape public perception, and for widespread adoption, an AI solution’s perceived reward must outweigh its perceived risk.

The velocity at which AI solutions become available necessitates balancing innovation with forward-thinking control mechanisms and technological guardrails. Trustworthy AI deployment will likely require rebalancing accountabilities and responsibilities across the organization and building cross-functional relationships between operations and technology.

Accountability for AI usage should be driven from the top down, with boards and management answering for AI incidents much as they do for cyber events or regulatory noncompliance. Management that cannot or does not effectively speak to the organization’s use of AI risks fostering a culture of nonaccountability.

Both consumers and employees expect organizational transparency when it comes to the use of AI, and the use of their data in AI. As regulators trend toward requiring informed consent from data subjects, organizations will need to both comprehensively understand and communicate their AI uses, not simply for regulatory compliance purposes, but to keep the lines of communication open with crucial stakeholders inside the organization and out.

Building AI Solutions With Trust by Design

Across the globe, regulators have identified common themes and core characteristics for organizations to consider as they develop and deploy AI. In the U.S., the White House established an AI Bill of Rights [3] highlighting its AI priorities, followed by the National Institute of Standards and Technology (NIST) providing guidance to organizations in understanding, assessing, and managing risk with its own Artificial Intelligence Risk Management Framework [4].

This early regulatory guidance provides an outline for proactive organizations to begin constructing guardrails for responsible AI usage. Most recently at the federal level, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence [5] seeks to establish safety testing for certain dual-use foundation models to identify and mitigate national security risks from AI. The Executive Order also encourages other regulatory bodies to exercise their existing powers to investigate bias, fairness, privacy, and other potential harms arising from the use of AI, much as some state-level regulations already do for specific industries.

With the full weight of regulation still unknown, organizations should continue paying diligent attention to new rules as well as the application of existing regulations. Though hard to imagine, the complexity of today’s AI technologies will likely pale in comparison to the solutions to come. Organizations that wrangle with the challenges of establishing trustworthy AI design practices today can set themselves up for success into the future. 

So, what are trustworthy AI design practices? “Trustworthy AI” refers to the leading practices and ethical standards by which humans design, develop, and deploy the technology that makes it possible for AI machines to produce safe, secure, and unbiased outputs. This is different from “trust in AI,” which is a deeper, intrinsic trust between human and machine.

To establish “trust in AI,” organizations must embed knowledge, responsibility, and accountability to uphold organizational values and the trust afforded by consumers, employees, communities, and other constituencies. Organizations that articulate and demonstrate how they use AI technology in the right ways may have an easier path to gaining people’s trust in AI.

Those who continue to build and refine today’s AI platforms must keep trust top of mind as they research and develop the next generations of Generative AI. Think of it as “trust by design,” where AI designers weave the aspiration into the very fabric of the technology, with all regulatory expectations for safety, security, and accountability firmly in mind.

But How to Inspire Trust?

To further balance AI innovation with adequate control mechanisms, consider the value of establishing an AI Governance framework aligned with your organization’s corporate values. Adhering to regulations and aligning with the organization’s values will help demonstrate responsible stewardship of AI to both employees and the public.

By establishing a greater focus on AI controls, including third-party risk management, organizations can build the credibility they crave to form a bedrock of trust. 

A foundation of trust is built slowly upon transparency and easily damaged by omission. To establish transparency around AI solutions, organizations can proactively share the results of safety testing, such as the red-team testing called for in the recent Executive Order [5]. Explaining to core constituencies the types of guardrails employed to protect against AI’s potential harms can help protect consumers [6] and establish public trust.

The Rewards of Trust

AI and Generative AI are not domains that will be won or lost; the question is which organizations will win with AI. As we enter the Age of With [7], trust can be the differentiator between success and failure in this technological revolution. By building solutions that incorporate trust into every stage of AI development and use, organizations stand to benefit both themselves and society. Organizations that continually demonstrate prioritization of their key stakeholders, through a transparent approach to adoption and the effective use of guardrails, will continue to gain the public’s trust.

Organizations today know AI is an imperative; those that treat trust as equally critical will help realize the benefits of AI for all.

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting. This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication. 

1. Deloitte’s TrustID Generative AI Analysis, August 2023.
2. Comparative Safety Performance of Autonomous- and Human Drivers: A Real-World Case Study of the Waymo One Service.
3. The AI Bill of Rights follows Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).
4. National Institute of Standards and Technology AI Risk Management Framework.
5. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023).
6. Stanford University Human-Centered Artificial Intelligence, 2023 Foundation Model Transparency Index.
7. Deloitte’s The Age of With™: Exploring the Future of Artificial Intelligence.
