Unpacking 3 of the Biggest Controversies in AI Today

Scaling AI, Featured Renata Halim

As AI (including Generative AI) reshapes industries, organizations are making decisions that reach far beyond technical specs; these choices influence innovation, security, and strategic direction. Should AI be regulated to ensure transparency and accountability, or could this stifle its growth? Is it wiser to build custom AI platforms tailored to unique needs, or leverage ready-made solutions for efficiency? And when considering infrastructure, how does the choice between hosted vs. self-hosted LLMs affect control, scalability, and security?

This blog will unpack three of the biggest controversies of AI today, offering data-backed insights and strategic takeaways to guide responsible, future-focused AI initiatives.

To Regulate or Not to Regulate?

As AI expands into critical and high-stakes applications, the debate over regulation intensifies. Advocates for regulation emphasize its importance for maintaining transparency and accountability, particularly in high-risk sectors like finance and healthcare. In contrast, critics worry that stringent regulatory frameworks could stifle innovation. The EU AI Act now assigns specific risk tiers and obligations, increasing the pressure on companies to stay compliant while keeping AI progress on track.

Dataiku empowers companies to navigate these regulatory demands responsibly with a suite of governance capabilities designed to support compliance at every stage of the AI lifecycle. Key capabilities include pre-assessment workflows to prevent non-compliant projects from progressing, tailored governance frameworks based on project risk, centralized documentation, and robust model traceability. Acting as a centralized compliance hub, Dataiku Govern enables teams to monitor, manage, and document AI workflows, ensuring transparency and control across projects.

💡Strategic Insight: Proactive governance with Dataiku not only simplifies compliance but strengthens organizational resilience and accountability. By providing tools for real-time monitoring, model tracking, and controlled LLM access, Dataiku enables organizations to adapt to evolving regulations without compromising their AI development momentum. This approach bolsters regulatory readiness and helps companies establish trust, positioning them as responsible leaders in an increasingly regulated AI landscape.

To Build or to Buy?

As companies navigate compliance requirements, they also face another crucial choice in their AI strategy: whether to build custom platforms or purchase ready-made solutions.

In the era of Generative AI, the build vs. buy decision has evolved. Previously, organizations faced a straightforward choice between developing an AI platform from scratch or purchasing a complete solution. The question then shifted to buying an end-to-end platform for AI (including Generative AI) versus building the connections between best-in-class tools in each area of analytics and AI. Now, with specialized, AI-powered point solutions available (especially for Generative AI), the conversation has returned to finding the best path forward. Ultimately, though, these point solutions are not scalable: they augment one process but provide no benefit to adjacent processes.

Building in-house provides unmatched control and customization but introduces challenges, including technical debt and scalability limitations, especially as Generative AI capabilities rapidly evolve. Maintaining a custom-built platform also demands considerable resources, time, and adaptability.

Conversely, companies can accelerate AI deployment and reduce development overhead by purchasing point solutions or best-of-breed tools. However, these point solutions often lack scalability across broader processes, limiting differentiation. Relying on multiple external vendors for specific functions also increases dependency and technical debt, as each solution requires ongoing updates and maintenance.

💡Strategic Insight: The decision to build or buy should reflect an organization’s specific goals, resource capacity, and flexibility requirements. A balanced approach — using an end-to-end platform that supports both custom integrations and pre-built functionalities — can offer the best of both worlds.

An end-to-end AI platform like Dataiku enables organizations to operate seamlessly across these options. Built-in tools enable organizations to integrate new innovations from the AI field directly into their enterprise tech stack and business processes. This approach provides flexibility to scale AI responsibly, maintain robust governance, and incorporate external developments that enhance the organization’s unique processes and needs.

Self-Hosted or Hosted LLMs?

Alongside platform strategy, companies also face pivotal choices regarding their infrastructure — specifically, whether to self-host AI models or use off-the-shelf, hosted AI services from third-party providers.

Choosing between self-hosted and managed AI models extends beyond technical preferences; it significantly impacts data security, scalability, operational agility, and cost. Self-hosting provides maximum control over data privacy and infrastructure, which is critical in highly regulated sectors like finance and government. This option allows organizations to customize their models to specific needs without sharing sensitive data or prompts with external service providers. However, self-hosting can be resource- and cost-intensive, often requiring substantial computational and hardware resources as well as specialized technical expertise.

In contrast, managed hosting solutions, typically offered by cloud providers, provide scalability, lower upfront costs, and access to advanced computational resources. These solutions streamline deployment and often include automatic updates, reducing the need for internal maintenance. However, reliance on third-party managed services can introduce dependency risks and pose data security and compliance challenges, particularly for organizations handling sensitive information.

To better understand how industry leaders handle these choices, we surveyed 400 senior AI professionals in May 2024 in collaboration with Databricks. The results showed that 85% are using or exploring hosted LLM services like the OpenAI GPT API due to streamlined integration and broad market acceptance. However, open-source, self-hosted LLMs — including Meta Llama, Mistral, DBRX, and Falcon — are also on the rise, with 56% of respondents expressing interest in these models for their flexibility and customizable security features.

💡Strategic Insight: When selecting AI infrastructure, organizations should carefully weigh security, scalability, and operational requirements. A hybrid approach — self-hosting mission-critical models for added control and using managed hosting for less sensitive, scalable applications — often offers the best balance.
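The hybrid pattern described above can be sketched as a thin routing layer placed in front of two deployments. Everything in this snippet — the endpoint URLs, model names, and the keyword-based sensitivity check — is a hypothetical illustration of the architecture, not a feature of any particular product:

```python
# Minimal sketch of a hybrid LLM routing layer: prompts that look
# sensitive go to a self-hosted, OpenAI-compatible endpoint inside
# the private network; everything else uses a managed service.
# All URLs, model names, and markers below are illustrative assumptions.

SELF_HOSTED = {
    "base_url": "http://llm.internal.example:8000/v1",  # hypothetical in-house endpoint
    "model": "llama-3-70b",
}
HOSTED = {
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
}

# Toy heuristic; a real deployment would rely on a data-classification service.
SENSITIVE_MARKERS = ("ssn", "account number", "diagnosis")

def route(prompt: str) -> dict:
    """Pick a deployment target: prompts that look sensitive stay in-house."""
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return SELF_HOSTED  # data never leaves the private network
    return HOSTED           # scalable managed service for low-stakes workloads

# Example: a prompt mentioning an account number is kept in-house.
target = route("Summarize the dispute on account number 4821.")
```

The routing shape stays the same however sensitivity is determined; swapping the keyword check for a proper classifier or policy engine changes only the condition, not the architecture.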

Dataiku’s adaptable platform supports both self-hosted and managed models, empowering companies to scale AI responsibly and efficiently. With robust governance and flexible deployment options, Dataiku enhances organizational agility, enabling businesses to stay competitive while meeting security and regulatory standards.

Strategic Choices in Regulation, Platform, and Infrastructure

Each decision — on regulation, platform strategy, or infrastructure — directly influences an organization’s capacity to integrate and scale AI effectively. Proactive governance builds trust, a flexible platform choice supports adaptability, and balanced infrastructure decisions ensure security and scalability. Organizations that align these choices with both immediate goals and a long-term vision establish a foundation for resilient, impactful AI that delivers sustainable value.
