AI Got Funded. Results Lag Behind

Episode 20

Hi there, 

For the past two years, AI lived in experimentation.

Pilots. Proofs of concept. Internal demos.

That phase is ending.

AI is now moving into budgets — where expectations change. Not “Can this work?” but “What does this deliver?”

This shift is subtle, but critical. Because most organizations are not built for it.

Inside the Issue

  • AI is becoming a budget line — not an experiment

  • Why growing investment is not translating into impact

  • The structural gap between deployment and value

  • Where most organizations break

  • What this means for teams building AI systems

AI Is Becoming a Budget Line

Recent data shows a clear pattern: AI is no longer isolated in innovation teams — it is being embedded into core business spending.

According to Gartner, spending on supply chain software with agentic AI is projected to reach $53 billion by 2030, with rapid adoption across enterprises.

This is not experimentation. It is planned, functional investment.

At the same time, ownership is shifting from innovation teams into business units. Decision-making is moving closer to finance and operational leadership, and AI is increasingly treated as part of core infrastructure rather than an optional layer.

In practical terms, AI is entering the same category as ERP systems or data platforms — systems that are expected to deliver consistent, measurable outcomes.

Investment Is Scaling. Impact Is Not.

Despite this shift, the outcomes remain uneven.

Research from McKinsey & Company shows that while AI adoption continues to grow, only a small share of organizations report meaningful financial impact at scale.

Similarly, Deloitte highlights that many companies are still transitioning from pilot projects to production systems and struggle to move beyond isolated use cases.

In other words, deployment is scaling faster than value.

Organizations are launching more initiatives, but relatively few are able to translate that activity into consistent business outcomes across functions.

The Real Bottleneck Is Not Technology

The common assumption is that the limitations are technical: models are not good enough, data is not clean enough, or infrastructure is not ready.

But most evidence points elsewhere.

Analysis from Harvard Business Review shows that organizations often fail in the “last mile” — turning AI outputs into decisions and actions embedded in real workflows.

At the same time, PwC emphasizes that value realization depends far more on operating model, ownership, and process integration than on the models themselves.

The constraint is not intelligence. It is execution.

From Pilots to Pressure

The move into budgets introduces a new dynamic: accountability.

Once AI becomes part of core spending, it competes directly with other investments. It must justify its cost, demonstrate measurable return, and operate reliably over time.

This creates a level of pressure that did not exist during the pilot phase.

And it exposes structural gaps that were previously hidden. Many organizations lack clear ownership of outcomes, do not define success metrics upfront, and rely on fragmented data foundations. Systems that worked as controlled experiments begin to break under real operational conditions.

Designing for Budget-Stage AI

To operate in this new phase, teams need to fundamentally change how they approach AI systems.

It starts with defining value before building capability — understanding where impact will be created, how it will be measured, and within what timeframe. Without this, even technically successful systems struggle to justify continued investment.

Ownership also becomes critical. AI initiatives need a clearly defined business owner who is accountable for outcomes, not just a technical team responsible for delivery.

Equally important is the shift from building isolated features to designing full systems. This includes integration into workflows, reliable data pipelines, and feedback mechanisms that allow systems to improve over time.

Finally, AI systems must be designed for continuity. Unlike traditional software, they require ongoing monitoring, evaluation, and adjustment. Without this, performance degrades and value erodes.
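
What continuity looks like in practice varies by system, but as a rough illustration, here is a minimal sketch of one common pattern: comparing a live quality metric against the baseline agreed when the budget was approved, and flagging degradation for the business owner to review. The metric name, threshold, and scores are hypothetical placeholders, not a prescription.

```python
# Minimal sketch of an ongoing evaluation check for a deployed AI system.
# All names, metrics, and numbers here are hypothetical placeholders.

from dataclasses import dataclass
from statistics import mean


@dataclass
class EvalResult:
    metric: str
    baseline: float
    current: float
    degraded: bool


def evaluate_against_baseline(scores: list[float], baseline: float,
                              tolerance: float = 0.05,
                              metric: str = "answer_accuracy") -> EvalResult:
    """Compare recent evaluation scores to the baseline set at approval.

    Flags the system as degraded when the current average falls more than
    `tolerance` below the baseline, so an owner can review before value erodes.
    """
    current = mean(scores)
    degraded = current < baseline - tolerance
    return EvalResult(metric=metric, baseline=baseline, current=current, degraded=degraded)


if __name__ == "__main__":
    # Example: weekly scores from a held-out evaluation set (made-up data).
    weekly_scores = [0.78, 0.75, 0.71, 0.70]
    result = evaluate_against_baseline(weekly_scores, baseline=0.80)
    if result.degraded:
        print(f"{result.metric} dropped from {result.baseline:.2f} to {result.current:.2f} - review needed")
    else:
        print(f"{result.metric} holding at {result.current:.2f}")
```

In a real deployment this sits alongside dashboards and alerting, but the principle is the same: value is measured continuously, against the same yardstick used to justify the spend.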

From Experiments to Accountability

AI is entering a new phase.

From experimentation to expectation. From capability to accountability.

The question is no longer whether AI can work.

It is whether it can deliver — repeatedly, at scale, and under real business constraints.

If you are navigating this shift — from pilots to production, from experiments to measurable outcomes — we can help design systems that are built for it.


Thank you for joining us for another edition of The Foundation.

P.S. We want to make sure this newsletter hits the mark. So reply to this email and let us know what you think.