Every vendor deck I’ve seen in the last eighteen months makes the same claim: AI coding assistants will make your developers 40%, 55%, sometimes 10x more productive. GitHub published studies. McKinsey published studies. Everyone published studies.
And yet, when I talk to engineering leaders, people actually running teams that build and ship software, almost none of them report faster delivery. Individual developers are writing code faster, yes. But the software isn’t shipping faster. Features aren’t reaching customers sooner. Release cycles haven’t compressed.
That gap is worth understanding.
The Typing Was Never the Bottleneck
The assumption behind AI coding tools is that writing code is the constraint on software delivery: developers write code faster, so software ships faster.
This assumption is wrong. It has been wrong for decades, and AI hasn’t changed that.
In most organisations, writing code represents maybe 20-30% of a developer’s working time. The rest is reading existing code, understanding requirements, waiting for reviews, debugging integration issues, navigating deployment pipelines, attending meetings, and dealing with the organisational friction that surrounds every release.
Copilot, Cursor, Claude Code and the rest have made that 20-30% noticeably faster. A developer who used to spend two hours writing a feature might now spend forty-five minutes. That’s real. But if the other six hours of their day are unchanged (the review queue, the unclear requirements, the flaky CI pipeline, the three meetings about the thing that could have been an email) the net effect on delivery speed is marginal.
You’ve made the fast part faster. The slow parts are still slow.
The Complexity Amplifier
Worse: AI coding tools, used carelessly, can actually slow delivery down.
When code is cheap to produce, people produce more of it. A developer who can generate a thousand lines of working code in an afternoon will do exactly that. The pull request lands. The reviewer now has a thousand lines to review. The test suite has a thousand more lines to cover. The codebase has a thousand more lines to maintain.
I’ve watched this play out in real time. Teams adopt AI coding tools and their commit velocity spikes. Code volume goes up. But the volume of code being reviewed, tested, and deployed doesn’t increase at the same rate, because the humans in those downstream processes haven’t changed.
The result is larger pull requests, longer review queues, more merge conflicts, and codebases that grow faster than the team’s ability to understand them. The AI made the coding faster. It made the delivery slower.
This isn’t an argument against AI coding tools. It’s an argument against treating code generation as the bottleneck.
Where the Time Actually Goes
If you want to understand why software isn’t shipping faster, trace where the time actually goes in a delivery cycle. For most enterprise teams, it breaks down roughly like this.
Requirements and alignment. 20-30% of elapsed time. Getting clarity on what to build, negotiating scope, resolving the gap between what stakeholders said and what they meant. AI hasn’t touched this.
Coding. 15-25% of elapsed time. This is the part AI has accelerated. A real win.
Code review. 10-15% of elapsed time. AI review tools exist, but most teams don’t trust them for anything beyond style checks. The substantive stuff (architectural mistakes, security issues, logic errors) is still done by humans. And those reviewers are the same people trying to write their own code.
Testing and QA. 10-20% of elapsed time. AI can generate unit tests, and it’s reasonably good at it. But integration testing, end-to-end testing, and the kind of exploratory testing that catches real bugs in real environments? Still largely manual. Still takes time.
Deployment and operations. 10-15% of elapsed time. Getting code from a merged pull request to production. CI/CD pipelines, environment provisioning, security scanning, change management approvals. None of this got faster because a developer wrote code faster.
Communication and coordination. 15-20% of elapsed time. Standups, planning sessions, dependency management, cross-team coordination. This is the dark matter of software delivery. It’s everywhere, it’s expensive, and it’s almost entirely unaffected by AI coding tools.
Add it up. AI has accelerated one slice of the process, coding, which represents maybe a quarter of total delivery time. Even if you made coding instantaneous, you’d cut total delivery time by at most 15-25%. Noticeable, sure. But not the transformation the vendor decks promised.
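The arithmetic above is just Amdahl’s law applied to delivery: the overall gain is capped by the fraction of time the accelerated slice actually occupies. A few lines make the ceiling concrete (the percentages are the rough estimates from this article, not measured data):

```python
def delivery_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Amdahl's law: overall speedup when only the coding slice gets faster.

    coding_fraction: share of total delivery time spent writing code (0..1)
    coding_speedup:  how much faster that slice becomes (e.g. 3.0 = 3x)
    """
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# Best case: coding is ~20% of elapsed time and AI makes it infinitely fast.
best_case = delivery_speedup(0.20, float("inf"))  # 1.25x, i.e. 20% less elapsed time

# More realistic: a 3x coding speedup on a 25% slice.
typical = delivery_speedup(0.25, 3.0)  # 1.2x overall
```

Even the infinite-speedup case tops out at 1.25x, which is why commit velocity can spike while release cycles barely move.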
The Integration Problem
There’s a dimension to this that’s specific to enterprise environments. AI coding tools generate code that works in isolation. Getting that code to work within a complex existing system is a different problem.
A developer asks an AI to write a function that processes customer orders. The AI produces clean, correct code. But it doesn’t know about the legacy pricing engine with seventeen special cases. It doesn’t know about the compliance requirement that order data must be encrypted at rest with a specific key management approach. It doesn’t know that the database schema was designed in 2014 and has constraints that no longer match the domain model.
The developer gets a fast start, then spends hours adapting the generated code to fit the actual system. Sometimes this is faster than writing from scratch. Sometimes it’s slower, because the generated code bakes in patterns that are subtly wrong for the context. Refactoring AI-generated code into something that fits can be harder than writing the right thing in the first place.
This integration tax is invisible in productivity studies, because those studies measure tasks in isolation. “Generate a function.” “Write a unit test.” “Refactor this class.” In isolation, AI is much faster. In a real codebase, with real constraints, in a real organisation, the gains are smaller than the benchmarks suggest.
What Would Actually Make Software Ship Faster
If you want to compress delivery cycles, you need to attack the actual bottlenecks. Not the ones that are easiest to automate.
Reduce ambiguity in requirements. The single biggest time sink I see in enterprise delivery is rework caused by unclear requirements. A team builds what they understood, which isn’t what the stakeholder meant, which triggers a redesign mid-sprint. AI can help here. Use LLMs to analyse requirements for ambiguity, generate test scenarios from acceptance criteria, or create prototypes that stakeholders can react to before coding begins. That’s a higher-leverage application of AI than code generation, and almost nobody is doing it systematically.
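A systematic ambiguity check doesn’t need much machinery. A minimal sketch: build a prompt that asks a model to flag vague terms and contested acceptance criteria before anyone codes against the requirement. `call_llm` here is a hypothetical placeholder for whatever model client your team uses; the prompt wording is illustrative, not a tested template.

```python
# Hypothetical sketch: run each requirement through an LLM ambiguity check
# before it enters a sprint. `call_llm` is a placeholder for any model client
# (a function taking a prompt string and returning the model's reply).
AMBIGUITY_PROMPT = """Review the requirement below. List every term that is
vague, every unstated assumption, and every acceptance criterion that two
reasonable engineers could interpret differently.

Requirement:
{requirement}
"""

def ambiguity_check(requirement: str, call_llm) -> str:
    """Return the model's list of ambiguities for a single requirement."""
    return call_llm(AMBIGUITY_PROMPT.format(requirement=requirement))
```

The point isn’t the prompt; it’s that the check runs on every requirement, every time, instead of relying on a senior engineer happening to spot the gap in a planning meeting.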
Accelerate review cycles. Code review is a throughput constraint in every team I’ve worked with. The senior developers who need to review code are the same people with the most demands on their time. AI-assisted review that handles the routine aspects (style, simple logic checks, test coverage analysis) and flags only the substantive issues for human review could cut cycle time significantly. But it requires trust in the AI’s judgement that most teams haven’t built yet.
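The triage logic behind that split is simple to state. A minimal sketch, assuming review findings arrive tagged with a category (the category names and dict shape are illustrative, not any particular tool’s output):

```python
# Hypothetical triage step for AI-assisted review: automation closes out
# routine findings; only substantive ones wait on a senior reviewer.
ROUTINE = {"style", "formatting", "naming", "test-coverage"}

def triage(findings):
    """Split review findings into auto-resolvable and human-required buckets."""
    auto, human = [], []
    for finding in findings:
        bucket = auto if finding["category"] in ROUTINE else human
        bucket.append(finding)
    return auto, human

findings = [
    {"category": "style", "msg": "inconsistent quote style"},
    {"category": "security", "msg": "user input reaches a SQL string"},
]
auto, human = triage(findings)
# The style nit is handled automatically; the security finding goes to a human.
```

The hard part isn’t the routing, it’s calibrating which categories the team trusts the AI to close without a second look, and that trust has to be earned incrementally.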
Invest in deployment automation. The path from merged code to production should be as close to zero-touch as possible. Every manual step, every approval gate, every environment that needs manual provisioning, every security scan that requires human interpretation, is a delay. This isn’t an AI problem. It’s an engineering problem that most organisations have underinvested in because it’s not as visible as feature development.
Reduce coordination overhead. Conway’s Law still applies. The architecture of your system will mirror the communication structure of your organisation. If shipping a feature requires coordination between five teams, no amount of coding speed will make that fast. Organisational design (how teams are structured, what they own, how dependencies are managed) has more impact on delivery speed than any tool.
The Real Opportunity
This is what frustrates me. The opportunity is genuine and significant. But it’s not where most organisations are looking.
The real opportunity isn’t making individual developers type faster. It’s using AI to attack the systemic bottlenecks that have constrained software delivery for decades. Requirements ambiguity. Review throughput. Testing coverage. Deployment friction. Organisational coordination.
The companies that will actually ship software faster won’t be the ones that adopted Copilot first. They’ll be the ones that looked at their entire delivery pipeline, identified the real constraints, and applied AI, along with process and organisational changes, to the bottlenecks that actually matter.
The gap between promise and reality isn’t a technology problem. It’s a systems thinking problem. We’re optimising a component when we should be optimising the system.
Until that changes, the gap will persist. The developers will write code faster. The software will still ship at the same speed. And the vendor decks will keep promising a revolution that’s always just one more tool away.