The AI Investment Dilemma: From Hype to Enduring Value
About This Document
72% of enterprises have AI in production. 88% use it in at least one function. Yet 80%+ report no measurable impact on EBIT — and only around 30% report positive ROI. The problem isn’t the technology. It’s that most organizations are investing in AI before defining the problem it’s meant to solve, measuring success by cost savings alone, and consistently underestimating what adoption actually requires.
This article draws on a GAP-hosted webinar featuring technology leaders from DEPT and PepsiCo to examine the patterns separating high-performing organizations from the rest — covering strategy before spend, redefining ROI, designing for constant change, and why human oversight remains non-negotiable.
AI maturity is no longer measured by how many pilots you launch or tools you deploy. It’s defined by something far less flashy and far more difficult: strategic clarity and disciplined execution. Organizations that create lasting value don’t treat AI as a feature or experiment. They integrate it into how decisions are made, how work gets done, and how the business evolves — while keeping accountability, context and judgment firmly human-led.
Yet most companies are stuck:
- 72% of enterprises now have AI in production
- 88% use AI in at least one function
- Yet 80%+ report no measurable impact on EBIT
- And only ~30% report positive ROI
Even though most organizations are investing in AI, very few are operationalizing it — and the average organization may be further behind than many believe.
To unpack what this looks like in practice, we hosted a webinar, “The AI Investment Dilemma: From Hype to Enduring Value,” featuring Bridget Fahrland, VP of Applied AI at DEPT; Fern Johnson, former CTO, VP of Infrastructure & Operations at PepsiCo; and Joyce Durst, CEO and Co-founder of Growth Acceleration Partners.
The discussion explored three critical themes shaping enterprise AI today:
- Moving beyond experimentation into measurable, repeatable value
- Why strategy, governance and data readiness determine success
- Understanding that adoption, culture and human oversight ultimately make or break success
Here’s what technology leaders are seeing from the front lines — and what it means for your next AI investment.
Why AI Initiatives Fail, Even When the Technology Works
Across industries, one conclusion is hard to ignore:
AI isn’t failing because the technology falls short. It’s failing because organizations approach it the wrong way.
The same patterns show up again and again:
- Most organizations invest in AI before defining the problem it’s meant to solve — treating it as a capability to deploy rather than a lever to drive specific business outcomes.
- They also get stuck in perpetual pilot mode — launching experiments that generate activity, but never translating into operational impact or scalable value.
- At the same time, ROI is often framed too narrowly. Cost reduction becomes the default metric, even though real competitive advantage comes from better decisions, faster execution and improved customer outcomes.
- But the biggest failure point isn’t technical — it’s human. Organizations consistently underestimate what it takes to drive adoption: shifting how teams work, aligning leadership and building trust in AI-driven processes.
The Core Tension: Speed vs. Sustainability
Technology leaders are under pressure to deliver fast ROI while building systems that won’t collapse in 12 months. In many organizations, this tension is compounded by a lack of baseline clarity — teams are expected to prove value without fully understanding current processes, costs or performance. Without that foundation, measuring impact becomes inconsistent and often misleading.
At the same time, AI innovation is reshaping how technology investments are approached. Traditional investment models built for long-term stability are increasingly misaligned with an environment where tools and capabilities change rapidly. Systems designed to last for years can quickly become outdated, while purely short-term solutions fail to scale.
But most organizations aren’t ready for either. As discussed in the webinar:
“Brands want cost savings — but they don’t even know what things cost today.”
You can’t prove value if you don’t understand your baseline. Yet many teams skip that step entirely.
To succeed, organizations must shift toward more adaptive approaches, including:
- Delivering near-term value while building flexible, scalable foundations
- Prioritizing speed without sacrificing structure and governance
- Designing systems that can be continuously improved, replaced or expanded as the technology landscape evolves
The Market Lens
The macro environment reinforces the execution gap.
- Gartner forecasts that 30% of GenAI projects will be abandoned after proof of concept, a predictable outcome when value definition, data readiness and risk controls are treated as afterthoughts rather than design inputs.
- BCG reports that only 26% of companies move beyond AI proofs of concept to create real value, highlighting that success depends on strategy and execution — not experimentation alone.
At the same time, organizations are scaling their commitment:
- McKinsey finds that 52% of $500M+ organizations have established dedicated teams to drive GenAI adoption, validating the need for embedded delivery squads to scale AI beyond pilots.
- Additionally, Gartner reports 56% of software engineering leaders name AI/ML engineers as the most in-demand role, reinforcing why nearshore AI engineering capacity is becoming a competitive advantage.
And when results fall short:
- BCG’s AI Radar 2026 shows 24% of organizations will ramp up resourcing and invest in outside experts when AI impact lags — aligning with GAP’s ability to accelerate delivery through nearshore, outcome-oriented teams.
The market isn’t lacking investment; it’s lacking execution discipline.
What does this mean for technology leaders? The advantage is shifting from those who merely explore AI to those who operationalize it.
What High-Performing Organizations Do Differently
Leading organizations are not experimenting more; they are executing with precision and intent. The difference shows up in a few critical ways:
1. Strategy Before Spend
Every AI investment is tied to a clear business problem and measurable impact — not curiosity or pressure to “do something with AI.” As Fern Johnson put it:
“If you’re just putting in cool tech… the money’s going to dry up.”
2. They Balance Quick Wins with Long-Term Thinking
Leading organizations run parallel tracks:
- Immediate use cases that show value
- Foundational work (data, governance, architecture)
This creates momentum without accumulating technical debt.
3. They Redefine ROI
Top performers move beyond cost savings as the primary success metric. They measure what actually drives competitive advantage:
- Decision speed
- Output quality
- Customer experience
- Team productivity
Cost savings are the easiest thing to measure, but they are rarely the most strategic.
4. They Design for Constant Change
The old mindset was durability. The new reality is adaptability: build fast, learn faster and replace when needed. High-performing organizations assume:
- Tools will change
- Models will evolve
- Architectures will need to be revisited
As discussed in the panel, today’s stack might be relevant this quarter and outdated the next. They don’t build for permanence. They build for evolution.
5. They Treat Adoption as a Leadership Responsibility
This is where most organizations fail and where top performers win.
They understand that deployment ≠ adoption, training ≠ behavior change and mandates ≠ engagement. As Joyce Durst emphasized:
“If you just tell people, ‘Use AI’… that is definitely not going to work.”
Instead, leaders must:
- Model AI usage themselves
- Create safe environments for experimentation
- Tie AI adoption to individual and team growth
Because in the end, AI doesn’t create value — people using AI do.
6. Human-in-the-Loop by Design
A defining trait of high-performing organizations is that they never remove the human from the equation — they elevate them.
Rather than positioning AI as a replacement, they design systems where:
- Human judgment remains central
- AI augments decision-making
- Accountability stays with people
This is especially critical in complex, high-stakes environments where context, ethics and nuance matter. The result: faster execution, better decisions and greater trust across the organization.
Because while AI can accelerate outcomes, only humans can ensure they’re the right ones.
A Practical Filter for AI Investment Decisions
Before funding your next initiative, ask:
- Do we understand the current process and its cost?
- Are we solving a real business problem?
- Is AI actually required — or just fashionable?
- How will this scale beyond a pilot?
- What behavior change is required?
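The filter above can be expressed as a simple pre-funding gate. The sketch below is illustrative, not a prescribed process: the five questions come from the checklist, while the function name, yes/no scoring and go/no-go logic are assumptions layered on top.

```python
# Hypothetical pre-funding gate for AI initiatives.
# The five questions come from the article's filter; the function name,
# data structure and yes/no scoring are illustrative assumptions.

FILTER_QUESTIONS = [
    "Do we understand the current process and its cost?",
    "Are we solving a real business problem?",
    "Is AI actually required, or just fashionable?",
    "How will this scale beyond a pilot?",
    "What behavior change is required?",
]

def review_initiative(answers):
    """Return (go, open_questions): fund only when every filter
    question has been answered with a confident 'yes' (True)."""
    open_questions = [q for q in FILTER_QUESTIONS if not answers.get(q, False)]
    return (not open_questions, open_questions)

# Example: two questions answered, three still open -> no-go.
go, gaps = review_initiative({
    FILTER_QUESTIONS[0]: True,
    FILTER_QUESTIONS[1]: True,
})
print(go, len(gaps))  # False 3
```

The point of the sketch is the default: an initiative is a no-go until every question is affirmatively answered, which mirrors the article's argument that skipping the baseline makes value claims unmeasurable.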
Remember:
“The difference between value and testing is whether you have a strategy.”
Where GAP Fits In
GAP helps organizations move from experimentation to production through:
- AI strategy tied to business outcomes
- Data and infrastructure readiness
- Autonomous, AI-enabled engineering teams
Let’s turn your AI ambition into measurable value.