Last month, in a quick, insight-packed webinar, Conor Jensen, Global Field CDO at Dataiku, and Oz Karan, Partner in Risk & Advisory at Deloitte, shared their thoughts and expertise on the hot takes featured in our 2024 predictions e-magazine about the future of AI. Their discussion explored the trends and topics increasingly dominating conversations around AI across all industries.
Go ahead and watch the session, or keep reading for the session’s top highlights — including their perspectives on how organizations can work to integrate trust across all phases of AI development and utilization.
Here are six key topics from “Building AI Solutions With Trust by Design ft. Deloitte”:
GPUs: An AI Infrastructure Pillar
Talk Topic #1 Covered by Conor Jensen
From the tail end of last year into the possibilities of 2024, the spotlight has shifted to a coming era of compute cost volatility and the significant role GPUs (graphics processing units) play in shaping the AI landscape.
Understanding GPUs’ role in innovation is crucial: Beyond rendering graphics, they excel at the parallel computations behind AI, machine learning (ML), and deep learning. The influence of GPUs has surged over the past five to six years, particularly in the realm of large language models (LLMs), making them pivotal to AI infrastructure.
However, the surge in demand for GPU power is outpacing supply, creating a vulnerability impacted by uncertain semiconductor supply chain dynamics. Global events further compound the supply stability issue, leading to a lack of availability and subsequently driving up costs. The critical question arises: How can organizations trust and leverage these technologies at scale?
Addressing the current challenge of GPU demand outpacing supply requires strategic planning. Organizations should proactively manage potential vulnerabilities arising from uncertain semiconductor supply chain dynamics and global events, and build trust in GPU utilization by adopting more transparent practices throughout their entire infrastructure.
Looking at LLMs in the Broader Ecosystem
Talk Topic #2 Covered by Conor Jensen
While there's pressure to focus solely on Generative AI and LLMs, it's crucial to recognize their place in the broader ecosystem, where AI, ML, neural networks, computer vision, and Generative AI all converge. This holistic outlook is essential for building a robust architecture and managing a cost-effective, reliable integration of Generative AI.
To manage LLMs effectively, it helps to separate the AI service layer from the LLM itself. With that separation, an organization can apply security, auditing, and accountability controls to LLMs in their own scope — practices that differ from the local management methods most commonly used before LLM adoption.
The World's Divided Perception of AI
Talk Topic #3 Covered by Oz Karan
The rapid pace of technological adoption has pushed many organizations into one of two polarized camps: fear and paralysis on one side, idealism and moonshot AI endeavors on the other. Underlying both of these starkly different perspectives, however, is a commonality. That common thread is trust, and in it lies the key to finding a sustainable middle ground.
Even so, organizations should be aware of the challenge of trust erosion. Deloitte's TrustID Generative AI study revealed that the mere mention of AI use lowered trust in an organization. This finding prompts an urgent need for adaptation and for rebuilding trust. Organizations that want to continue their AI journey and tap into the real value of AI need to think strategically and act quickly to mitigate trust erosion.
Forward-Thinking Frameworks Instill Pervasive Trust
Talk Topic #4 Covered by Oz Karan
While examining trust decline, factors such as bias, privacy, explainability, and output reliance come to the forefront of the conversation.
Frameworks such as Deloitte's Trustworthy AI and Dataiku's RAFT framework for Responsible AI provide a foundation to instill leading practices that support trust building. Accountability, transparency, and fairness are common themes across these frameworks, emphasizing a comprehensive understanding of technology at each level and touchpoint in the enterprise. Taking it a step further, a robust risk matrix, enterprise-level AI risk reporting, technical guardrails, and governance structures are all also components of a future-minded strategy that become imperative for instilling trust.
Instilling trust with a framework isn’t just a band-aid solution, though. Building these practices and scaling AI with trust at the core offers far-reaching benefits — alignment with regulatory guidance, established AI controls, more effective communication with stakeholders, continued trust-building efforts, and more. These improved processes not only save costs in the long run but also help prevent the disastrous, unintended consequences that turn into nightmarish news headlines.
Who’s in Charge of Trust?
Talk Topic #5 Covered by Conor Jensen & Oz Karan
At the end of the day, this effort has to be deliberately taken on to become real practice with impact. Companies are approaching it differently, but those that seem most successful are the ones with someone at the C-suite level whose role centers on advocating for the mission of trust. An organization’s AI maturity naturally affects how realistic that is, but it remains the gold standard.
This “job” is also evolving quickly, but the title doesn’t really matter. What matters is having that dedication at the top of the organization, with the level of attention this subject warrants.
Trust initiatives shouldn’t fall solely on one person’s shoulders, either; trust should be baked into enterprise-level culture. Everyone should have a seat at the table and a voice. The “trust mindset” should permeate every area of an organization across all use cases, not remain isolated to data teams.
The Evolution of Ethics and Regulatory Implications
Talk Topic #6 Covered by Conor Jensen & Oz Karan
As ethics becomes integral to governance, organizations must scrutinize their historical data and provide concrete proof of ethical practices. The evolving regulatory landscape, including the EU AI Act, underscores the need for robust regulatory management.
In a landscape where AI's wider application demands a balance between value and trust precautions, organizations equipped with a trust-by-design approach are better positioned to navigate evolving regulations and emerging technologies. As the new era of compute cost volatility descends, the integration of trust frameworks becomes paramount for a sustainable AI future.
Putting It All Together
In conclusion, the evolving landscape of AI in 2024 emphasizes the critical role of GPUs in shaping infrastructure, the need for a holistic approach to LLMs within the broader ecosystem, and the imperative to address the divided perceptions of AI by rebuilding trust.
Instilling trust through forward-thinking frameworks — supported by accountable C-suite leadership and a culture of trust that permeates the enterprise — emerges as the key to overcoming challenges related to bias, privacy, and explainability. As the evolution of ethics and regulatory implications unfolds, organizations adopting a trust-by-design approach are well positioned to navigate the complexities of a rapidly changing AI landscape.