This is a guest article from Alec Coughlin. Alec helps enterprise leaders cut through AI hype and agent washing to focus on what actually drives business value. He hosts “AI with Alec,” a podcast and thought leadership platform providing pragmatic guidance on enterprise AI transformation.
Enterprise AI transformation is full of noise. To move from noise to signal, there are four areas every CEO can focus on with confidence.
1. Application- to Data- to Knowledge-Centric Architecture
According to a Dataiku/Harris Poll survey, “Global AI Confessions Report: CEO Edition,” 74% of U.S. CEOs admit they’ll lose their job within two years if they don’t deliver measurable AI business gains.
Further, 82% of CEOs think AI will be an essential competitive differentiator within the next three years. Yet, today, CEOs admit 35% of their AI initiatives are more about optics than impact (“AI washing”), designed to signal innovation and boost reputation rather than deliver meaningful business value.
What should a CEO do to not lose their job in two years? How do they avoid “AI washing” and, instead, invest in building a rock solid foundation to future-proof their business by delivering “meaningful business value” immediately and over the long haul?
Recognize how significant today’s LLM-enabled enterprise shift from application- to data- to knowledge-centric architecture is, and invest in modernizing the tech and data stack to capitalize on it.
Before LLMs, databases were built to let SaaS applications deliver value by helping businesses solve specific problems and generate corresponding outcomes. Unfortunately, this created siloed datasets, because applications didn’t necessarily talk to each other.

As a result, data about the same process, customer, prospect, or employee was duplicated across applications, and there was no scalable way to reconcile the copies. LLMs have changed all of this.
Thanks to LLMs and their ability to understand language, data, and so on, it’s as if all of the walls that have been built between SaaS applications are coming down. As a result, the data in one application can talk to the data in another application and, therefore, a more complete and rich dataset can be generated.
This transformation ripples across the entire value chain, particularly when enterprises introduce AI agents into their operations. LLMs are the cognitive foundation of this shift: by enabling the transition from application-centric to data-centric architecture, they make more sophisticated decision-making and intelligent systems possible.
Pioneering organizations are evolving all the way to a knowledge-centric architecture, where ontologies are operationalized through knowledge graphs, unleashing the full potential of human and AI agent collaboration, at scale.
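One way to picture a knowledge-centric architecture is as a set of facts (triples) over shared entities, so data contributed by different applications can be queried as one graph. The sketch below is a minimal, illustrative model in plain Python with no graph database; the entity and relationship names are assumptions for the example, not a prescribed schema.

```python
# Subject–predicate–object triples linking records that once lived in
# separate SaaS applications (CRM, support desk, billing). All names
# here are illustrative assumptions.
triples = [
    ("acme_corp", "is_a", "Customer"),
    ("acme_corp", "has_contract", "contract_42"),  # from the billing app
    ("ticket_7", "raised_by", "acme_corp"),        # from the support app
    ("deal_3", "belongs_to", "acme_corp"),         # from the CRM
]

def neighbors(entity):
    """Return every fact mentioning the entity, regardless of which
    source application contributed it."""
    return [t for t in triples if entity in (t[0], t[2])]

# One query now spans what used to be three silos.
facts = neighbors("acme_corp")
```

In production this role is typically played by a knowledge graph or semantic store layered over an ontology, but the principle is the same: relationships are first-class data, not locked inside any single application.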
This architectural evolution leads directly to the second key focus area: the transition from deterministic to probabilistic applications.
2. AI Is Eating Software: Deterministic vs. Probabilistic Applications
“How many of the SaaS applications that we’re currently relying on as an organization are going to exist in 18-24 months?”
Not only is this question real; it has been asked of me by multiple Fortune 1000 senior business leaders recently. It’s an excellent question but extremely challenging to answer, and it goes to the heart of how complex the enterprise “buy vs. build vs. partner” decision has become for those leaders, today and going forward.
Deterministic applications are the cornerstone of an application-centric architecture. To simplify, deterministic applications can be understood as those designed to be used exclusively in a pre-determined way against a pre-determined, finite number of use cases.
Probabilistic applications are built with an LLM at the core of their architecture, which makes them intelligent. Freed from the constraints above, they can reason, adapt, and take on capabilities beyond their original design, so they do more, and become more valuable, over time.
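To make the contrast concrete, here is a deliberately simplified sketch. The function names are hypothetical, and the LLM is stubbed out so the example is self-contained; a real probabilistic application would call a hosted model instead.

```python
def route_ticket_deterministic(subject: str) -> str:
    # Pre-determined rules over a finite set of cases; anything
    # outside those cases falls through to a default bucket.
    rules = {"invoice": "billing", "password": "it_support"}
    for keyword, queue in rules.items():
        if keyword in subject.lower():
            return queue
    return "general"

def route_ticket_probabilistic(subject: str, llm=None) -> str:
    # An LLM-backed router can interpret phrasing it has never seen.
    # `llm` stands in for a real model client (an assumption, not an API).
    if llm is None:
        llm = lambda prompt: "billing"  # stub so this sketch runs offline
    return llm(f"Which queue should handle: {subject!r}?")
```

The deterministic router sends “We were charged twice” to the general queue because no rule anticipated that paraphrase; an LLM-backed router can recognize it as a billing issue without anyone enumerating the case in advance.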
This raises an important question: Why would organizations continue building deterministic applications when probabilistic alternatives offer superior capabilities? The answer lies at the heart of the disruption currently reshaping the traditional SaaS industry.
Another big benefit of probabilistic applications is that every interaction makes them smarter: they learn, adapt, and evolve, creating more value for the next person or AI agent that uses them and amplifying the impact AI agents can have on augmenting human workflows.
3. Enterprise AI Agents Augment Humans aka Superpowers
What’s the fastest and perhaps lowest risk way to generate value from human and enterprise AI agent collaboration?
As Jeff Bezos would say, focus on the things that don’t change. He said, “I very frequently get the question: ‘What’s going to change in the next 10 years?’ And that is a very interesting question; it’s a very common one. I almost never get the question, ‘What’s not going to change in the next 10 years?’
And I submit to you that that second question is actually the more important of the two because you can build a business strategy around the things that are stable in time … When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.”
It’s becoming clear that small pods of cross-functional teams enabled by an AI-first tech stack can produce outcomes that are orders of magnitude greater than traditional teams. It has also become clear that enterprise AI agent results span a broad spectrum of successes and failures. So, to Jeff Bezos’ point, narrow your focus onto the things that don’t change.
Operationalize an AI-first approach by creating small cross-functional teams focused on business goals that don’t change, operating on an AI-first tech stack and collaborating with a group of enterprise AI agents over a shared data-centric or knowledge-centric architecture.
This is where things become especially interesting: pioneering organizations can develop competitive moats ahead of their slower-moving competition, moats that can grow exponentially as these intelligent systems become more valuable with each interaction and deliver benefits that compound over time.
4. An Example? Give Your B2B GTM Team Superpowers
There is a large gap between enterprise AI technology capabilities and adoption. A stark example is the difference between today’s traditional B2B GTM motion and what becomes possible when a modern, AI-first motion, enabled by an AI-native tech stack, is fully capitalized on.
According to “Seizing the Agentic AI Advantage” from McKinsey, "Organizations must begin reimagining their IT architectures around an agent-first model, one in which user interfaces, logic, and data access layers are natively designed for machine interaction rather than human navigation. In such a model, systems are no longer organized around screens and forms but around machine-readable interfaces, autonomous workflows, and agent-led decision flows.”
I believe all B2B businesses will create an LLM-enabled operating system upon which small cross-functional human teams, in collaboration with AI agents, will deliver results that previously required teams three to five times their size.
Building on a lightweight POV I’ve published on ontology-aware intelligent systems, “Becoming an AI-First Company: Ontology Aware Intelligent System, GTM Use Case,” and on my recent newsletter on the topic: magical things happen when you move the definitions, relationships, and business context AI agents need to “understand the business” closer to the data.
The result is greater execution velocity: robust data modeling and semantic layers provide the “business truth” to AI systems, enabling higher-quality sales pipeline to be generated faster and larger deals to be closed through shorter sales cycles.
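As a rough illustration of a semantic layer carrying “business truth,” the sketch below defines shared metrics next to the data rather than inside any one application, so humans and AI agents query the same definitions. The metric names and fields are illustrative assumptions, not a prescribed model.

```python
# Toy GTM dataset; field names are assumptions for this sketch.
deals = [
    {"amount": 120_000, "stage": "closed_won", "cycle_days": 45},
    {"amount": 80_000, "stage": "closed_lost", "cycle_days": 90},
    {"amount": 200_000, "stage": "closed_won", "cycle_days": 30},
]

# The "business truth": one shared definition of each metric that every
# consumer (human or AI agent) uses, instead of re-deriving it per app.
semantic_layer = {
    "won_revenue": lambda rows: sum(
        d["amount"] for d in rows if d["stage"] == "closed_won"
    ),
    "win_rate": lambda rows: sum(
        d["stage"] == "closed_won" for d in rows
    ) / len(rows),
}

won = semantic_layer["won_revenue"](deals)
rate = semantic_layer["win_rate"](deals)
```

In practice this role is played by a governed semantic or metrics layer over the warehouse; the point is that “win rate” means exactly one thing everywhere, so an AI agent reasoning over pipeline cannot drift from the definition the business uses.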
Last but certainly not least, this intelligent system, and the human-AI agent collaboration it supports, lets humans focus on the GTM work we’re best at, enabled in ways that were unimaginable before ChatGPT burst onto the scene.
As Kurt Muehmel, Dataiku Head of AI Strategy said during his recent appearance on AI with Alec, “I think that inevitably we are going towards a hybrid human-agent workforce and we need the tooling to support that and we need the mindset in the organizations to support that as well.”
If you want to learn more, check out AIwithAlec.com, subscribe to my newsletter, YouTube channel or follow me on LinkedIn and X.