AI made some things 10x cheaper. The things it didn't are now the bottleneck.
AI collapsed the cost of writing code, tests, and scaffolding. That's real. GitHub Copilot, Cursor, Claude — they made the mechanical parts of programming dramatically faster.
But software engineering was never just typing. The question that matters is: where does the effort actually go when you're building and shipping software?
Toggle between eras to see how AI reshaped effort allocation — and what it left behind.
When per-unit cost drops, you ship more. Normalize total effort to 100% and the parts that didn't get cheaper take up a growing share.
Each one is a fundamental constraint that doesn't yield to "more AI". Click to expand.
More components don't just add up — they multiply. Every new piece interacts with every existing piece.
Pairwise interactions = n × (n-1) / 2
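The quadratic growth is easy to see directly. A minimal sketch (illustrative Python, not from the original page):

```python
def pairwise_interactions(n: int) -> int:
    """Distinct pairs among n components: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the component count roughly quadruples the interaction surface.
for n in (10, 20, 40):
    print(n, "components ->", pairwise_interactions(n), "pairs")
# 10 -> 45, 20 -> 190, 40 -> 780
```

Linear growth in components, quadratic growth in the surface you have to keep correct.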
The same pattern is playing out in academic research.
"Writing papers" collapsed. Peer review didn't. With far more papers submitted, review now dominates the cost.
Tao's answer: formal verification with Lean. Proofs that machines check, not humans.
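The idea in miniature: a statement Lean will only accept alongside a complete proof, which the machine then checks end to end. A toy example, not drawn from Tao's work:

```lean
-- Lean verifies this proof mechanically; no human reviewer required.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term were wrong or incomplete, the file simply wouldn't compile. Verification cost stays flat no matter how many proofs you generate.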
We're doing the same thing for software.
Tests check the cases you thought of. Bugs live in the cases you didn't. More code = more blind spots.
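A toy illustration of the blind-spot problem (hypothetical code, not from the source): every test the author thought to write passes, while the input nobody imagined slips through.

```python
def clamp(value: int, low: int, high: int) -> int:
    """Keep value within [low, high] -- assuming low <= high."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# The cases the author thought of all pass:
assert clamp(5, 0, 10) == 5
assert clamp(-1, 0, 10) == 0
assert clamp(99, 0, 10) == 10

# The case nobody thought of: inverted bounds (low > high).
# clamp(5, 10, 0) returns 10, which is outside any sensible range.
```

The test suite is green; the bug is still there. More generated code means more of these unimagined inputs.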
Effective with 3-5 reviewers. AI-generated PRs already wait 4.6x longer. Double the throughput and it collapses.
Style guides, naming rules, architecture patterns. Humans forget them. AI never learned them. They break silently.
Catches formatting and simple patterns. Can't reason about architectural constraints, data flow, or cross-module invariants.
These worked when throughput was lower.
They don't work at 1.5x, and they certainly won't work at 3x.