The Waterfall Trap of Spec-Driven AI Development
It has been fascinating, and honestly a bit funny, watching the AI development ecosystem evolve over the last few months. Recently, there has been a massive surge in developers adopting what is being called “Spec-Driven Development” (SDD).
On the surface, SDD looks a lot like the pattern I outlined in AI-Augmented Software Development for sustaining the AI partnership: creating persistent context through summary files. Both approaches rely on maintaining a source of truth outside of the immediate chat window to keep the AI aligned.
But there is a critical, fundamental difference in execution. SDD is inadvertently reviving the old Waterfall methodology for the AI era.
In the rigid, one-way pattern of SDD, the workflow goes like this: You write a spec, and the AI generates the code. If the code is wrong, the methodology dictates that your spec was incomplete. You must go back, update the natural language spec, and have the AI regenerate the code iteratively until it passes.
Despite massive adoption, I’ve noticed that the actual quality of work coming out of SDD workflows is often quite bad. Why? Because writing comprehensive, unambiguous natural-language specs is something programmers are notoriously bad at, almost as a defining trait of the profession.
Even funnier is the industry’s reaction to this problem. Instead of recognizing that sometimes a rigorous programming language itself is the best and most efficient spec, the ecosystem doubled down. People started inventing “clean specs,” “spec definition languages,” and an entirely new discipline of “spec engineering.”
We are snowballing complexity. The reality of software is that the more layers of translation we have—from your brain, to English, to a “Spec Definition Language,” to an AI prompt, and finally to code—the more deviation the final product will have from what you actually had in mind.
This is exactly why Chapter 6 of my book argues that we can stop writing detailed requirements docs and instead focus on what matters: values and goals.
My approach, rooted in a Value-Driven AI Workflow, is much more akin to Agile methodology. You write a rough spec or a summary file, and the AI codes. If the code is wrong, that does mean the spec wasn’t perfectly complete. But instead of fighting the English document, you just fix the code right away. You collaborate directly with the AI, you fix the logic in the rigorous language of code, and then you update the summary files to catch up with reality.
The SDD crowd is falling into a trap by assuming that natural language is always the most efficient way to describe software. It isn’t. Try writing a purely natural-language spec for a highly complex, nuanced UI interaction, or an advanced mathematical formula. You will spend exponentially more time and effort wrestling with the English description than you would have spent just writing it in code in the first place.
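To make that concrete, here is a small Python sketch of a standard “ease-in-out cubic” easing curve, the kind of nuanced UI detail that is painful to pin down in prose. (The function name and the choice of curve are mine, purely for illustration; nothing here comes from any particular SDD tool.)

```python
def ease_in_out_cubic(t: float) -> float:
    """Map linear progress t in [0, 1] to eased progress in [0, 1].

    Accelerates through the first half of the animation,
    then decelerates symmetrically through the second half.
    """
    if t < 0.5:
        return 4 * t ** 3
    return 1 - (-2 * t + 2) ** 3 / 2
```

Try describing that same behavior in English: “the animation starts slowly, accelerates until the midpoint, then mirrors that acceleration as deceleration…” A paragraph later, the exact curve is still ambiguous. The two-branch expression above *is* the spec, and it is shorter than any faithful prose description of it.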
AI completely changes the equation by shrinking implementation time, but that only works if we don’t invent new, bureaucratic hurdles. Summary files are the backbone of context continuity, but they are a living document, not a rigid contract.
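For readers who have never kept one, a summary file needs no special format. Here is a hypothetical sketch (the project, file names, and details are all invented for illustration); the point is that it records current reality, values, and goals, and gets updated after the code changes, not before:

```markdown
# Project Summary: billing-service

## Current state
- Invoices are generated by `InvoiceBuilder`; proration logic lives in `prorate.py`.
- Known gap: refunds are not yet idempotent (see TODO in `refunds.py`).

## Values and goals
- Correctness over speed: money math uses integer cents, never floats.
- Keep the public API small; prefer extending existing endpoints.

## Recent changes
- Fixed the proration rounding bug directly in code, then updated this file.
```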