
Is AI really improving our productivity?
The tools are smarter. But are we actually getting more done?
Ask anyone in tech right now and they'll tell you AI has changed the way they work. Developers are shipping features faster. Writers are drafting articles in minutes. Managers are summarising long documents with a single click. The productivity narrative around AI is loud, confident, and practically unavoidable in every dev community, newsletter, and conference talk.
But here's a question that's been sitting quietly in the back of my mind: is any of that actually true?
Not in an anecdotal sense — I'm sure you have a story about how Claude or Cursor saved you an afternoon. I'm asking at a larger, more uncomfortable scale. Are we, as developers, as a workforce, as an economy, genuinely more productive because of AI? Or are we just busier in newer, shinier ways?

The numbers don't quite agree with the hype
Every few months, a consulting firm drops a report claiming AI could add trillions to global GDP. The headlines write themselves. But when you look at actual productivity data — the kind economists track, like output per hour worked — the picture is far murkier.
This isn't new territory. In the 1980s and 90s, computers were transforming every desk job on the planet. And yet the economist Robert Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics." It took nearly two decades for the productivity gains from computing to actually show up in the numbers.
We might be in the same moment with AI right now. The tools are genuinely impressive. The macro impact? Still mostly potential.
Some studies do show real gains. A well-known GitHub study found developers using Copilot completed a task about 55% faster. That sounds like a slam dunk until you read the fine print: the task was an isolated, well-defined coding exercise (implementing an HTTP server against a spec) — not the messy, ambiguous, cross-functional work that makes up most of the actual job.

Are we producing more, or just generating more?
There's a distinction worth making here that often gets glossed over: output versus outcome.
AI is extraordinarily good at helping us produce output. More lines of code written. More boilerplate generated. More pull requests opened. More features "done." And the same applies beyond code — more emails, more decks, more reports, all with the same unanswered question.
But output isn't the same as outcome. The question isn't whether you wrote the code faster, but what the speed bought you. Did the code actually solve the user's problem? Did the refactor make the codebase easier to work in six months later? Did the email achieve what it needed to achieve? Did the report lead to a better decision? Did the proposal win the client?
There's a real risk that AI is making us faster at producing things that don't matter, while the harder, more valuable work — the thinking, the judgment calls, the creative leaps — still takes just as long as it always did. We're not short on content. We never were. We're short on clarity.

The hidden cost nobody's counting
Here's something I've noticed in my own workflow: AI doesn't eliminate work. It shifts it.
When Claude or Copilot generates a block of code for me, I now have a new job: reviewing it carefully, understanding what it actually does, editing out the confidently stated inaccuracies, and making sure it fits with the rest of the system in ways the model has no context for. That's real time. And because the code looks right at first glance, there's a temptation to skim the review — which is exactly how subtle bugs quietly make it to production.
Then there's the verification tax. AI can be confidently wrong. It'll generate code that compiles and runs but doesn't handle the edge case you care about. It'll suggest an approach that works in isolation but creates a bottleneck at scale. Every AI-generated output carries an implicit cost: someone — usually you — has to think critically about whether it's actually correct. That's not free time.
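To make that concrete, here's a minimal, hypothetical sketch. The function and the bug are invented for illustration (no real model produced them), but the pattern is the one the verification tax is about: code that runs fine on the happy path and fails on the input nobody tested.

```python
# A deliberately simple example of code that compiles, runs, and looks
# correct at a glance, the kind of thing an assistant plausibly generates.

def average_latency(samples: list[float]) -> float:
    """Return the mean request latency in milliseconds."""
    return sum(samples) / len(samples)   # ZeroDivisionError when samples is []


# The fix is trivial; *noticing* the need for it is the unbilled review work.

def average_latency_safe(samples: list[float]) -> float:
    """Return the mean request latency, or 0.0 when no samples exist yet."""
    if not samples:              # the edge case a skimmed review sails past
        return 0.0
    return sum(samples) / len(samples)
```

Nothing here is wrong in a way a compiler or a green test suite will catch. It's wrong in a way only a reviewer with context will catch: can that list ever be empty in production? Answering that question is the part of the job the "55% faster" framing never counts.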
In some professions, like law or medicine, the review burden of AI output is substantial enough that practitioners have started to question whether it's actually saving them anything.
There's also the cognitive overhead of prompting itself. Getting a genuinely useful output from an AI model isn't just typing a question. It involves knowing what to ask, how to ask it, what context to provide, and how to evaluate the result critically. That's a skill, and it takes time to develop. For experienced developers, this becomes natural. For everyone else, it adds friction that rarely shows up in the "AI saved me X hours" calculations.
Add to that the time spent staying on top of which tools to use, which models are best for which tasks, and the constant churn of new features — and you start to wonder how much of the "AI saves time" narrative is actually spent keeping up with AI itself.

Not everyone is winning equally
There's another dimension to this that the productivity headlines tend to flatten: the gains from AI are not evenly distributed.
Some studies suggest AI tools disproportionately benefit people who are already skilled at their jobs. A senior engineer using Copilot gets a meaningful boost. A junior engineer, still building their mental model of how systems work, might produce code faster but understand it less — which creates problems down the line that nobody counts against the original productivity gain. That gap compounds over time. You end up with more code shipping, and a team with a shakier foundation underneath it.
In knowledge work broadly, the people best positioned to leverage AI are those who already have the domain expertise to direct it, critique it, and know when it's wrong. For everyone else, the risk is outsourcing the learning itself — the struggle that builds real capability — to a tool that gives you the answer without the understanding.
That's not a knock on AI. It's a knock on how uncritically we're adopting it and measuring its value.

The Jevons paradox, but for time
There's an old idea in economics called the Jevons paradox: when a resource becomes more efficient to use, we don't consume less of it — we consume more. Steam engines became more fuel-efficient in the 19th century, and coal consumption went up, not down, because now more things were worth powering with steam.
The same thing might be happening with AI and time.
When a feature that used to take two weeks now takes three days, the natural response isn't to slow down and be more deliberate. The response is to pull more tickets, take on more scope, promise more in the next sprint. The time savings get immediately reinvested into more work — often work that didn't exist before AI made it feel feasible.
This is Parkinson's Law in another guise: work expands to fill the time available. AI creates new time, and we are remarkably good at filling it. Whether we're filling it with the right things is the question nobody wants to slow down long enough to ask.

So why does everyone feel more productive?
This part is worth being honest about: productivity and the feeling of productivity are not the same thing.
AI tools are genuinely satisfying to use. You type, something impressive appears, and you feel like you've accomplished something. That dopamine loop is real. But feeling like you moved fast is not the same as actually moving in the right direction.
There's a well-known psychological tendency, sometimes called the "busyness heuristic", to equate being busy with being valuable. AI feeds that feeling brilliantly. Your output is higher, your response times are faster, your to-do list moves. Whether any of it is adding real value is a separate question.
Most engineering teams aren't measuring productivity at the outcome level — are users getting more value? Is the system getting healthier? They're measuring velocity, features shipped, tickets closed, issues fixed. At that level, yes, AI looks great. But zoom out, and you have to ask whether the features were the right ones, whether the shortcuts created long-term costs, and whether the velocity was taking you somewhere worth going.

The real question is harder
I don't think AI is a fraud. I use it, I find it useful, and I believe it has real potential to change how we work — just not in the smooth, linear, everything-gets-faster way the headlines suggest.
The honest version of this conversation isn't "is AI increasing productivity?" The honest version is: what kind of productivity do we actually want?
The best engineering work — the kind that builds systems that last, solves real problems, and actually makes users' lives better — has always been slow, uncertain, and hard to shortcut. It requires deep context, careful judgment, and the kind of understanding that comes from sitting with a problem long enough to really see it. AI can help around the edges of that work. But it hasn't changed what's at the centre of it.
Before you measure how much faster AI made you, it's worth asking whether what you made faster was actually worth making at all. That question has nothing to do with the tools. It's still entirely on you.
And maybe that's the uncomfortable truth the productivity narrative is dancing around.

References
- Research: Quantifying GitHub Copilot's impact on developer productivity and happiness — GitHub Blog
- The Impact of AI on Developer Productivity: Evidence from GitHub Copilot — arXiv
- The Solow Productivity Paradox: What Do Computers Do to Productivity? — Brookings Institution
- Productivity Paradox — Wikipedia
- Jevons Paradox — Wikipedia
- What is Jevons Paradox? And why it may — or may not — predict AI's future — Northeastern University
- Parkinson's Law — Wikipedia
- Being busy and the illusion of productivity — Ness Labs