The Observation
Organisations spend extraordinary amounts of money on strategy development. They hire the best consulting firms, run the right workshops, and produce beautifully crafted strategy documents. Then they fail at execution.
This isn’t a new observation. Anyone who’s spent time inside a large organisation knows it intuitively. The strategy-execution gap is one of the most documented problems in management literature. What’s less well understood is why the standard explanations keep failing to fix it.
The conventional wisdom points to three usual suspects: culture, leadership alignment, and change management. These aren’t wrong — they’re incomplete. They describe symptoms, not root causes. And they’ve spawned an industry of frameworks that attempt to bridge the gap from the top down, as if execution failure is primarily a communication problem between the C-suite and the people doing the work.
It isn’t.
The Core Thesis
After 15 years of building and scaling data and AI capabilities inside complex organisations — from a 1,000-person technology organisation at Westpac to regulated healthcare AI at Cochlear — I’ve developed a different view of why strategies fail in execution.
The insight is this: execution failures leave diagnostic signatures, but those signatures don’t live where most people look. They live in the liminal spaces — the gaps between teams, between stated strategy and actual behaviour, between what gets measured and what actually matters, between what’s said in the boardroom and what happens on the ground.
These aren’t soft signals. They’re identifiable patterns that, once you know what to look for, are remarkably consistent across industries, organisation sizes, and strategy types. Traditional consulting frameworks treat these signals as noise. They’re not noise — they’re the diagnostic data.
This methodology is deliberately different from the prescriptive frameworks that dominate the strategy-execution literature. McKinsey’s, BCG’s, and Bain’s approaches all share a common assumption: that if you apply the right framework, execution will follow. The evidence suggests otherwise. Organisations that have faithfully applied these frameworks still fail at execution at roughly the same rate as those that haven’t.
The approach I’ve developed is diagnostic, not prescriptive. It identifies the specific patterns of execution failure in a given organisation before recommending intervention. You can’t fix what you haven’t accurately diagnosed.
The Methodology
The methodology operates across three layers of organisational analysis:
Layer 1: Signal Detection
The first layer involves systematic identification of execution signals across the organisation. These signals fall into predictable categories:
- Behavioural divergence — where stated priorities and actual resource allocation diverge
- Translation gaps — where strategic intent gets distorted as it moves through organisational layers
- Measurement misalignment — where what gets tracked doesn’t correspond to what the strategy requires
- Temporal disconnects — where strategy operates on one time horizon while execution incentives operate on another
Each signal category has specific indicators that can be observed and documented. This isn’t sentiment analysis or culture surveys — it’s structured observation of organisational behaviour.
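To make the idea of structured observation concrete, each signal can be logged as a typed record rather than a survey score. The sketch below is purely illustrative: the names (SignalCategory, Observation) and the example entry are my own hypothetical framing of the four categories above, not a published schema or tool.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class SignalCategory(Enum):
    """The four signal categories from Layer 1."""
    BEHAVIOURAL_DIVERGENCE = "behavioural_divergence"
    TRANSLATION_GAP = "translation_gap"
    MEASUREMENT_MISALIGNMENT = "measurement_misalignment"
    TEMPORAL_DISCONNECT = "temporal_disconnect"


@dataclass
class Observation:
    """One documented instance of an execution signal."""
    category: SignalCategory
    unit: str                 # team, function, or layer where it was observed
    indicator: str            # the specific observable behaviour
    evidence: str             # what was actually seen or documented
    observed_on: date
    tags: list[str] = field(default_factory=list)


# Hypothetical example: stated priority vs. actual resource allocation
obs = Observation(
    category=SignalCategory.BEHAVIOURAL_DIVERGENCE,
    unit="Consumer Lending",
    indicator="headcount vs. stated strategic priority",
    evidence="Data platform named a top-three priority; 2 of 40 engineers allocated to it.",
    observed_on=date(2024, 3, 1),
    tags=["resourcing", "priority-mismatch"],
)
```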
Layer 2: Pattern Recognition
Individual signals are noisy. Patterns are not. The second layer maps detected signals against known failure patterns — recurring configurations of execution breakdown that appear across different organisations and industries.
Some patterns are structural: the strategy requires cross-functional coordination, but incentive structures reward functional silos. Some are temporal: the strategy demands long-term investment, but the reporting cadence creates pressure for short-term wins. Some are political: the strategy redistributes power, but the execution plan doesn’t account for resistance from those who lose influence.
The critical insight is that these patterns are finite and classifiable. They’re not infinite variations on a theme. After observing them across enough organisations, you start to see the same failure modes repeating — different context, same underlying structure.
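If the failure patterns really are finite and classifiable, the pattern library can be treated as a catalogue keyed on the combination of signal categories present. The sketch below continues the hypothetical types from the previous block; the three catalogue entries are illustrative stand-ins, not the actual pattern library.

```python
# A deliberately small, illustrative catalogue. In practice the library
# would be built up from observed engagements, not hard-coded like this.
PATTERN_CATALOGUE = {
    "structural: silo incentives vs. cross-functional strategy": {
        SignalCategory.BEHAVIOURAL_DIVERGENCE,
        SignalCategory.MEASUREMENT_MISALIGNMENT,
    },
    "temporal: long-horizon strategy vs. short-cycle reporting": {
        SignalCategory.TEMPORAL_DISCONNECT,
        SignalCategory.MEASUREMENT_MISALIGNMENT,
    },
    "political: power redistribution without a resistance plan": {
        SignalCategory.TRANSLATION_GAP,
        SignalCategory.BEHAVIOURAL_DIVERGENCE,
    },
}


def match_patterns(observations: list[Observation]) -> list[str]:
    """Return catalogue patterns whose signal signature is fully present."""
    present = {o.category for o in observations}
    return [
        name for name, signature in PATTERN_CATALOGUE.items()
        if signature <= present  # every required category was observed
    ]
```

The lookup itself is trivial; the substantive work is in building and refining the catalogue from real organisations.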
Layer 3: Diagnostic Mapping
The third layer synthesises the detected patterns into a diagnostic map — a structured representation of where execution is failing and why. This map becomes the basis for targeted intervention rather than generic recommendations.
The diagnostic map differs from a standard gap analysis in a fundamental way: it identifies causal chains, not just gaps. It shows how a measurement misalignment in one part of the organisation cascades into a behavioural divergence in another, which creates a translation gap in a third. Understanding these chains is what makes intervention targeted rather than scattergun.
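Because the map traces causal chains rather than listing gaps, a directed graph is a natural representation: nodes are detected signals, edges carry the mechanism by which one feeds the next. A minimal sketch with invented edge data, purely to show the shape of the structure:

```python
from collections import defaultdict

# Directed edges: cause -> effect, each annotated with the observed mechanism.
# Node names and mechanisms here are invented for illustration, not a real diagnosis.
causal_edges = [
    ("measurement misalignment in finance",
     "behavioural divergence in product teams",
     "teams optimise for the reported metric, not the strategic outcome"),
    ("behavioural divergence in product teams",
     "translation gap at middle management",
     "roadmaps drift, so strategic intent gets reinterpreted layer by layer"),
]

graph = defaultdict(list)
for cause, effect, mechanism in causal_edges:
    graph[cause].append((effect, mechanism))


def trace_chain(start: str, depth: int = 0) -> None:
    """Walk downstream from a root-cause node and print the causal chain."""
    for effect, mechanism in graph.get(start, []):
        print("  " * depth + f"{start} -> {effect}  ({mechanism})")
        trace_chain(effect, depth + 1)


trace_chain("measurement misalignment in finance")
```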
Where This Comes From
This methodology didn’t come from academic research or consulting engagements. It came from being inside organisations that were trying to execute strategies and watching them struggle.
At Westpac, I built and scaled a technology and data organisation from 20 people to over 1,000 across five years. The bank had clear strategic ambitions around data and AI. The strategy documents were sound. But execution repeatedly stalled — not because the strategy was wrong, but because the liminal spaces (between teams, between technology and business, between central and domain functions) were full of undiagnosed execution patterns. The same failures kept recurring in different forms, and the standard remedies (more alignment workshops, better communication, clearer OKRs) repeatedly failed to fix them.
At Cochlear, the stakes are different. When you’re building AI systems that contribute to products used by over 700,000 people with hearing implants, execution failure isn’t just a business problem — it has real human consequences. The regulated healthcare environment compresses the feedback loop: execution patterns that might take years to surface in a bank become visible in months when patient outcomes are on the line. That compression accelerated the development of this methodology significantly.
The pattern recognition came from watching the same fundamental execution failures appear across two radically different industries — financial services and medical technology — and realising that the failure modes are structural, not contextual. Different organisations, different industries, different strategies, same diagnostic patterns in the liminal spaces.
Applications
This methodology has particular relevance for three audiences:
Private Equity Firms and Portfolio Companies
Value creation plans (VCPs) are strategy documents by another name, and they fail at execution for the same reasons. PE firms invest heavily in developing the VCP, then hand it to portfolio company management teams with the implicit assumption that execution is an operational problem. It isn’t — it’s a diagnostic one. The methodology provides a structured way to assess execution capability before the VCP fails, not after.
Enterprise Leaders Responsible for Strategy Execution
If you’re a CTO, CDO, VP, or similar leader who’s been handed a strategy to execute and can feel it stalling, the standard advice is to improve alignment, communicate more clearly, and track more metrics. If you’ve already tried those things and they’re not working, the problem is probably in the liminal spaces that those approaches don’t reach. This methodology provides a different diagnostic lens.
Boards and Executives Suspecting Execution Failure
Boards often sense that strategy isn’t translating into execution but struggle to pinpoint why. Management reports show activity and progress on individual initiatives, but the strategic outcomes aren’t materialising. This methodology provides a structured way to look at execution that goes beyond status reports and milestone tracking to identify the actual failure patterns.
This research represents codified operational experience, not consulting theory. For more on how I apply this thinking, see my writing on strategy and AI or get in touch about advisory work.