When Building Gets Cheap, Thinking Gets Expensive
Let’s face it: the software industry has changed—and it’s still changing at a ridiculous pace.
Every few weeks, a new LLM drops that unseats the current champion, only to be dethroned again shortly after. As an end user, this is genuinely fun to watch. Each announcement feels like unlocking a new superpower. Models are faster, more capable, cheaper, and often more energy-efficient. We’re doing things today that would have sounded like science fiction not that long ago.
In many ways, we really are living in the future.
AI-forward companies—both in what they build and how they build it—are taking full advantage of this. Features ship faster than ever. Prototypes appear overnight. Even non-technical roles are moving quicker, using AI to explore ideas, generate drafts, and unblock themselves.
And yet.
A recent tweet by @thdxr has been stuck in my head:
your org rarely has good ideas. ideas being expensive to implement was actually helping
That line has been quietly ruining my sleep.
Then, over the weekend, my nine-year-old wanted to watch Jurassic Park. There’s a scene where Jeff Goldblum’s character, surrounded by technological hubris, delivers the most important piece of product advice ever put on film:
Slow it down
Jeff was right. And so is Dax.
Slowness wasn’t a flaw in the old way of building software—it was a feature. The friction forced us to ask better questions. It created space for research, debate, and planning. It gave product managers time to understand users, designers time to explore, and engineers time to think before committing to a direction.
Everyone benefited.
Now, speed is the goal. Executives want faster delivery. Discovery gets compressed or quietly delegated to AI. And don’t get me wrong—AI is incredibly helpful here. It accelerates research, synthesizes feedback, and lowers the cost of exploration.
But we’re not at the point where it can replace human judgment in deciding what is worth building. Not yet, anyway.
My honest opinion is that we’re moving too fast—and it’s causing us to build the wrong things. I see this at work, and I’m guilty of it in my personal projects too. When building becomes cheap and easy, it’s tempting to skip the uncomfortable question:
Should we build this at all?
Ideas are cheap. Everyone has them, and that’s a good thing. Share them. Argue about them. Pressure-test them. I’ve lost count of how many ideas I thought were great until the discovery process revealed they weren’t. Or they were—but not as impactful as something else. Sometimes the best outcome is watching an idea morph into something entirely different, and even then, it might not make the cut.
That’s OK. The thinking phase is part of the work.
The brutal reality is that building things just because we can is not a winning strategy. Even if the latest state-of-the-art model enables something that wasn’t possible yesterday, that alone isn’t a reason to pivot tomorrow.
Speed is powerful. But without restraint, it doesn’t lead to progress—it leads to noise.
Noise in the product. Noise in the documentation. Noise in the code.
Your team already feels it. They might not say it out loud, but they feel the drag: more edge cases, more cognitive load, more “temporary” decisions that never quite get revisited. Your customers will feel it too—and they’ll figure it out faster than you expect.
Then comes the expensive part.
Undoing the noise means disbanding teams. Reworking infrastructure. Deleting code that took real effort to write. Rewriting documentation. Retraining people. And, in the hardest moments, admitting to customers that something you shipped confidently was the wrong call.
Speed made it easy to build. It did not make it easy to unbuild.
That’s the paradox we’re running headfirst into. AI removes friction from implementation, but friction was never the enemy. Friction was the thing quietly forcing us to think.
If we don’t reintroduce some form of intentional slowness—real discovery, real debate, real judgment—we’re not moving faster. We’re just getting better at making mistakes at scale.