AI's Great Productivity Delusion
The human is the agent

For the Better comes to you with ideas about how and why to build companies focused on human flourishing and stories of the people who are doing it. Other enthusiasms may occasionally appear.
Over the past two years, LLMs have become more and more capable of performing long-running tasks reliably. As this chart shows, in 2025 alone they went from being able to tackle tasks that would take humans an hour to doing projects that would take us ten hours, with the same rate of success. (You can read more about the methodology and some of the implications of this finding here).

But even as LLMs make this leap forward, we’re also starting to see significant problems when they’re deployed this way. Everyone is excited about vibe coding – which has moved beyond its niche engineering uses – and other ways of delegating work to AI. But there are real pitfalls, and a real dark side, to this progress that must be addressed if we’re going to form beneficial relationships with agents moving forward.
In many cases, people are literally delusional about how much more productive they are with LLMs. In one study, developers using these tools reported being far more productive than they were without them. Subsequent analysis showed that in fact, “when developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.”
LLMs are dangerous for people who think they're making progress but actually aren’t, not least because they’re so good at simulating progress. As Armin Ronacher writes in “Agent Psychosis”: “We’re relying on these little companions to validate us and to collaborate with. But it’s not a genuine collaboration like between humans, it’s one that is completely driven by us, and the AI is just there for the ride. We can trick it to reinforce our ideas and impulses.”
The results of this work can range from underwhelming to unusable. Ronacher continues:
Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.
The thing is that the dopamine hit from working with these agents is so very real. I’ve been there! You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense.
You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy.
Rachel Thomas compares the charge Ronacher describes from working with LLMs to the “dark flow” experienced by gambling addicts who continue to bet even when they’re losing, caught up in the illusion of winning perpetrated by the same machines taking their money. “Researchers on gambling addiction have coined the term to describe this insidious variation on true flow,” she writes:
Both slot machines and LLMs are explicitly engineered to maximize your psychological reaction. For slot machines, the makers want to maximize how long you play and how much you gamble. LLMs are fine-tuned to give answers that humans like, encouraging sycophancy and that they will keep coming back.
These are serious liabilities to be reckoned with. Those of us who use LLMs need to be aware of them lest we be seduced into a state of agent psychosis or dark flow. One way to steer clear of false progress is to approach LLMs not as collaborators, but as teaching machines.
People often ask me about the best ways to use Lean Startup these days. My answer is always the same: Lean Startup uses learning as the unit of progress, rather than any artifact.
The purpose of its tools is to help practitioners learn what they need to accomplish their goal – not to actually accomplish it for them. Similarly, LLMs are great for learning how to do a task or enhancing a skill. Using them to generate slop does neither of those things. Using them with a learning orientation is the most effective way to integrate agents into our lives. Keeping humans in the loop (like we do with Solveit) makes LLMs effective tools rather than agents of addiction.

Incorruptible is the culmination of everything I've learned about how to build successful companies that stay true to their mission, and where the ones that fail have gone wrong. It’s the manual I wish I'd had when countless founders and leaders first came to me.
It’s also something of a companion volume. The Lean Startup helps entrepreneurs build valuable organizations. Incorruptible explains how to protect them from the forces that threaten not just their missions, but their very existence.
Its pages are filled with case studies, stories from my own experiences as a founder and consultant, and tools for building truly incorruptible organizations.
It’s written for founders, executives, investors, and citizens of all kinds who want to help create and support companies – and a society – that are aligned with human flourishing.
The most important thing for a successful book launch is a lot of preorders.
So here’s my humble ask: If you liked The Lean Startup or anything else I’ve written or done, order a copy of Incorruptible as soon as possible.
After you purchase, you’ll be able to claim these 5 bonuses (and whatever else I dream up between now and May 26!).
Eric