Every board meeting, every strategy session, every investor call in 2025 includes some variant of: “What’s our AI strategy?”
The pressure to “do AI” has never been higher. The challenge is figuring out what’s real value versus what’s expensive theater.
After three years of AI implementations across financial services, healthcare, and other sectors, here’s what I’ve learned about separating hype from reality.
The Hype Cycle Pattern
Phase 1 (2022-2023): “We need a GenAI POC!”
- Everyone built chatbots
- Demos looked incredible
- Almost nothing went to production
Phase 2 (2024): “We need an AI strategy!”
- Hired AI leads
- Created AI centers of excellence
- Launched 50 AI experiments
Phase 3 (2025 - Now): “Show me the ROI”
- CFOs are asking hard questions
- Budgets are tightening
- Time to separate winners from pretenders
What’s Actually Working
1. Document Intelligence (Real Winner)
The hype: “AI will understand all your documents!”
The reality: Extracting structured data from unstructured documents is genuinely transformative.
Real use cases:
- Contract analysis: Extracting key terms from thousands of contracts (saved us 2000+ hours)
- Invoice processing: Reducing manual data entry by 80%
- Compliance review: Flagging regulatory risks in documentation
Why it works:
- Clear ROI: Hours saved Ă— hourly rate
- High accuracy: 95%+ with human review
- Immediate value: Payback in months, not years
Investment range: $50K-500K depending on scale
2. Code Assistance (Quietly Powerful)
The hype: “AI will replace programmers!”
The reality: AI makes good programmers significantly more productive.
We deployed GitHub Copilot across engineering. Results after 6 months:
- 25% faster feature delivery
- 30% reduction in boilerplate code time
- Improved code quality (fewer syntax errors)
Why it works:
- Measurable productivity gains
- Low friction adoption
- Reasonable cost: $20-40 per developer per month
The catch: Only works if you already have good engineers. It doesn’t magically create capability.
3. Customer Service Augmentation (Mixed Results)
The hype: “AI chatbots will handle all customer service!”
The reality: AI handles simple queries well. Complex issues still need humans.
What works:
- Tier 1 support: Password resets, status checks, FAQ
- Agent assist: Suggesting responses to human agents
- After-hours: Basic support when humans aren’t available
What doesn’t work:
- Complex problem-solving
- Emotionally charged situations
- Anything requiring judgment calls
Key metric: Containment rate. Aim for 40-60% of queries resolved by AI. Above 70% usually means you’re frustrating customers.
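To make that metric concrete, here's a minimal sketch in plain Python of how you might compute containment rate and check it against those bands. The function names, thresholds, and numbers are illustrative assumptions, not anyone's reporting API.

```python
# Hypothetical helper: compute chatbot containment rate and sanity-check it
# against the 40-60% target band. Thresholds and numbers are illustrative.

def containment_rate(resolved_by_ai: int, total_queries: int) -> float:
    """Share of inbound queries fully resolved without a human agent."""
    if total_queries == 0:
        return 0.0
    return resolved_by_ai / total_queries


def assess(resolved_by_ai: int, total_queries: int) -> str:
    rate = containment_rate(resolved_by_ai, total_queries)
    if rate < 0.40:
        return f"{rate:.0%} contained: below target, AI is handling too little"
    if rate <= 0.60:
        return f"{rate:.0%} contained: in the healthy 40-60% band"
    if rate <= 0.70:
        return f"{rate:.0%} contained: watch customer satisfaction closely"
    return f"{rate:.0%} contained: likely deflecting customers who need a human"


print(assess(resolved_by_ai=5_200, total_queries=10_000))
# "52% contained: in the healthy 40-60% band"
```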
4. Predictive Analytics (The Unsexy Winner)
The hype: “AI will predict the future!”
The reality: Narrow prediction problems with good data work incredibly well.
Real wins:
- Churn prediction: Identifying at-risk customers 30 days early
- Demand forecasting: Reducing inventory costs by 15%
- Fraud detection: Catching suspicious patterns in real-time
Why it works:
- Clear problem definition
- Good historical data
- Measurable business impact
The catch: These aren’t new. They’re traditional ML/AI dressed up in new marketing. But they still work.
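To show just how conventional this is, here's a minimal churn-model sketch using scikit-learn. The file name, feature columns, and label are hypothetical; the point is that the machinery is standard supervised learning, nothing GenAI-specific.

```python
# Minimal sketch of a "traditional ML" churn model, assuming a historical
# table of customer features plus a churned-within-30-days label.
# Column names and the data file are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_history.csv")  # hypothetical extract
features = ["tenure_months", "tickets_90d", "usage_trend", "late_payments"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned_30d"], test_size=0.2, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate, then score current customers and hand the riskiest to retention.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Swap in gradient boosting or richer features as needed; the business value comes from the 30-day head start the score gives your retention team, not from the model's sophistication.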
What’s Not Working (Yet)
1. “AI Agents” for Complex Workflows
The promise: Autonomous AI agents that handle multi-step business processes end-to-end.
The reality: Too brittle for production. Edge cases break them constantly.
Verdict: 12-24 months away from enterprise readiness. Keep experimenting but don’t bet the business on it.
2. Generative AI for Domain Expertise
The promise: AI that can make expert judgments in specialized domains (medical diagnosis, legal analysis, financial advisory).
The reality: Works for pattern recognition, fails at true expertise that requires judgment and context.
Verdict: An augmentation tool for experts, not a replacement, and likely to stay that way for the foreseeable future.
3. Enterprise-Wide “AI Transformation”
The promise: Big-bang AI transformation across the entire organization.
The reality: These initiatives usually fail. Too broad, unclear ROI, organizational resistance.
What works instead: Pick targeted use cases with clear value, prove them, then expand.
The ROI Framework That Actually Works
Before investing in any AI initiative, answer these questions:
1. What’s the specific problem?
- ❌ “Improve customer experience”
- ✅ “Reduce average handle time in customer service by 20%”
2. What’s the baseline?
- Must measure current state before AI
- Need control groups to prove AI impact
3. What’s the expected ROI?
- Calculate: (Benefit - Cost) / Cost
- Target: >200% ROI within 18 months
- Reality: Factor in hidden costs (data prep, change management, ongoing maintenance); a worked example follows this list
4. What happens if it fails?
- Can you pivot?
- Can you shut it down?
- What’s the exit cost?
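Here's the question-3 calculation as a back-of-the-envelope Python snippet, using the hours-saved-times-rate benefit from the document-intelligence example earlier. Every number is made up; the point is that data prep, change management, and maintenance sit in the cost base alongside the license fee.

```python
# Worked ROI example with made-up numbers. "Hidden" costs are included
# explicitly so the calculation reflects total cost, not just license fees.

benefit = 2_500 * 60        # e.g. 2,500 hours saved/year at $60 fully loaded
license_cost = 120_000      # annual platform/vendor cost (assumed)
data_prep = 40_000          # one-off data cleanup (assumed)
change_mgmt = 25_000        # training and rollout (assumed)
maintenance = 30_000        # ongoing monitoring and tuning (assumed)

total_cost = license_cost + data_prep + change_mgmt + maintenance
roi = (benefit - total_cost) / total_cost

print(f"Benefit: ${benefit:,}  Cost: ${total_cost:,}  ROI: {roi:.0%}")
# Benefit: $150,000  Cost: $215,000  ROI: -30%
# Fails the >200% bar in year one; either the benefit has to grow
# or the cost has to come down before this gets funded.
```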
Investment Tiers That Make Sense
Based on company size and maturity:
Tier 1: Starter (< $50K/year)
- GitHub Copilot for developers
- ChatGPT Enterprise for knowledge workers
- Document processing for a specific high-volume workflow
ROI timeline: 3-6 months
Tier 2: Intermediate ($50K-500K/year)
- Custom document intelligence solutions
- Customer service AI augmentation
- Predictive analytics for specific business problems
ROI timeline: 6-12 months
Tier 3: Advanced ($500K-2M/year)
- Custom LLM fine-tuning
- Multi-use case AI platform
- AI center of excellence with dedicated team
ROI timeline: 12-24 months
Tier 4: Strategic (>$2M/year)
- AI-first product development
- Foundational model development
- Organization-wide AI capability building
ROI timeline: 24-36+ months
My Recommendations by Organization Type
If you’re a startup:
- Focus on Tier 1 tools
- Make your product AI-enabled, not AI-first (unless that’s your core differentiator)
- Don’t hire an “AI team” yet
If you’re mid-market:
- Pick 2-3 Tier 2 use cases
- Build internal capability
- Prove value before expanding
If you’re enterprise:
- Take a portfolio approach: Tier 1 broadly, Tier 2 selectively, Tier 3 experimentally
- Build a platform team to support use cases
- Balance centralization (standards, infrastructure) with federation (domain use cases)
The Hard Truth
Most organizations are over-investing in AI experiments and under-investing in:
- Data quality: AI only works with good data
- Change management: Technology is easy, adoption is hard
- Measurement: You can’t improve what you don’t measure
Before adding another AI POC, ask: “Do we have the fundamentals right?”
What I’m Actually Betting On
In my own organization, here’s where I’m putting resources:
High conviction:
- Code assistance for engineering productivity
- Document intelligence for operational efficiency
- Customer service augmentation (with human oversight)
- Predictive analytics for known problems
Medium conviction:
- GenAI for content creation (internal documentation, training materials)
- AI-assisted decision support (recommendations, not decisions)
Low conviction / experimental:
- Autonomous agents
- AI-driven strategy
- Creative domain AI
The Bottom Line
AI is real. The value is real. But it’s not magic, and it’s not appropriate for every problem.
The winners in 2025 are organizations that:
- Pick specific, measurable problems
- Start small and prove value
- Invest in fundamentals (data, change management, measurement)
- Scale what works, kill what doesn’t
Stop chasing every new AI announcement. Start solving real problems with appropriate tools.
That’s how you turn AI hype into AI value.
Want to discuss what AI investments make sense for your organization? I’m always happy to reality-check AI strategies.