The Way We've Always Done It
Why many familiar development practices no longer make sense in an AI-augmented world
In This Chapter:
- The distinction between timeless software development values and outdated processes
- How traditional methods evolved to work around human cognitive limitations
- Why many familiar development practices no longer make sense in an AI-augmented world
- The common mistake of automating existing workflows rather than rethinking them
Software development has accumulated decades of best practices, rituals, and traditions. Some of these have earned their place through proven effectiveness, while others persist largely through inertia. As we enter the age of AI-augmented development, it’s time to take a thoughtful look at what we do and why we do it—separating what’s valuable from what’s merely familiar.
The Good (Values) and the Outdated (Processes)
Over decades of building software, our industry has figured out what really matters. These core values aren’t tied to specific tools or trendy methodologies:
- Quality: Software that actually works and does what users need
- Maintainability: Code that other humans can understand and change without cursing your name
- User focus: Building solutions that address real people’s actual problems, not just technical exercises
- Efficiency: Getting the most bang for your development buck
- Time-to-market: Delivering value quickly enough to matter in the business context
- Agility: Responding to changing stakeholder needs without derailing the entire project
- Collaboration: Solving tough problems by putting heads together
- Continuous improvement: Getting better instead of just getting older
These values stuck around because they work. Quality builds trust (and prevents those 2 AM production alerts). Maintainable code saves future you from hating past you. User focus ensures we’re building things people actually want instead of solutions looking for problems. Efficiency means delivering more without burning out. Time-to-market keeps businesses competitive. Agility helps teams navigate the ever-changing business landscape. Collaboration beats struggling alone. And continuous improvement keeps you from becoming that developer still writing code like it’s 1999.
But here’s where it gets interesting: these timeless values led to specific processes that are starting to look like flip phones in a world of smartphones.
Take documentation. Everyone agrees good docs are valuable—they help new team members, preserve knowledge, and give you something to read at 3 AM when production is down. But the process? Manually writing documentation after the fact, usually reluctantly, in documents that age like milk rather than wine. We did it this way because human memory and typing were once our only options.
Or consider requirements gathering. The value of understanding what to build before building it is undeniable. But the process—lengthy specification documents, requirements that become outdated before coding begins, and the eternal game of telephone between stakeholders and developers—creates waste and frustration on all sides.
Then there’s Jira ticket tracking and effort estimation. Tracking work and planning capacity makes sense in theory. But in practice? Hours spent in planning poker sessions debating whether something is a 5 or an 8, updating ticket statuses that no one looks at until the sprint review, and creating detailed burndown charts that inevitably resemble cliff diving competitions rather than gradual descents.
Let’s not forget the endless meetings. Daily standups that mysteriously expand from “15 minutes max” to an hour, retrospectives where the same three improvement ideas circulate on an eternal carousel, and those architectural discussions where everyone joins while secretly checking Slack and quietly doing other work. You’ve seen the memes about “this meeting could have been an email”—they’re funny because they’re true. In the traditional world, we accepted this meeting madness as the price of coordination. But when AI can generate code faster than we can discuss how to write it, spending half your Tuesday in a room arguing about implementation details feels like bringing a horse to a Formula 1 race.
Code reviews reveal a similar pattern. Their value is substantial when done well: they catch bugs, spread knowledge, and maintain consistency in the codebase. The best code reviews are collaborative learning experiences that make both the code and the developers better. But the process as commonly practised—write code in isolation, submit PR, wait for someone to have time (while they’re in meetings!), address feedback, wait again—introduces delays and context-switching that are increasingly hard to justify. The valuable parts (knowledge sharing, error catching) we want to keep. The overhead parts (waiting, context switching, process delays) we could definitely improve.
And testing? We’ll always need to verify our software works. But writing repetitive test cases for every edge condition? That process exists because humans had to manually think of and code for each scenario, with all the excitement of filling out tax forms.
See the pattern? We’ve built processes around human limitations—our memory caps out, we type slowly, our attention wanders, and we can’t instantly recall that StackOverflow answer from three years ago. These processes weren’t random; they were workarounds to maintain precious values while being human in a pre-AI world.
So now we face a question: If many of our processes were designed around limitations that AI can help with, which ones should we rethink? And more importantly, how do we keep what matters while changing how we get there?
We don’t need to throw everything out—many traditional practices still make sense. But we should separate the timeless values from the “that’s how we’ve always done it” processes. Some processes might need minor tweaks; others might be ready for a complete makeover.
What matters is being thoughtful about this shift—understanding not just what we’ve been doing, but why we’ve been doing it that way, and whether those reasons still hold up when we have an AI teammate that never gets tired, forgets, or needs a coffee break.
When Methods Reflect Our Human Limitations
“Wait, what does this code even do?”
It’s 3 AM. The production system is down. You’re staring at an unfamiliar function written by a developer who left the company two years ago. There are no comments, no documentation, and the variable names offer cryptic clues at best. We’ve all been there.
This scenario highlights something fundamental: software development methods aren’t arbitrary rituals—they’re adaptations to our very human limitations. Some evolved through painful experience, others through careful design, but all address the gap between what our brains can handle and what modern software demands of us.
Take our working memory—the mental scratchpad where we manipulate information. Cognitive science tells us it can hold roughly 4-7 items simultaneously. Yet even a modest application might contain thousands of functions across dozens of files. This cognitive mismatch led to modular architecture, where systems are broken into digestible components with clear interfaces. We don’t design this way because computers need it—a machine would happily execute a million-line monolith. We do it because we need it.
“Documentation is a love letter to your future self.” This developer wisdom contains truth. Those architecture diagrams and wiki pages serve as external memory, letting us reference complex designs without having to remember every detail. They compensate for our memory’s tendency to fade and distort, especially when we’re juggling multiple projects or returning to code after months away.
Our collaborative nature creates another set of challenges. In the early days of programming, a single person might write an entire application. Today’s systems are too complex for that approach. Yet coordination doesn’t come naturally—left unchecked, we drift into information silos and misaligned efforts.
Enter the daily standup: “What did you do yesterday? What will you do today? Any blockers?” Simple questions that create synchronisation points where teams align their understanding before diving back into individual work. Some developers roll their eyes at these rituals, but they exist precisely because our awareness of what others are doing has natural limits.
The code review process addresses a different limitation: our blind spots. Even the most skilled developers miss things—security vulnerabilities, edge cases, performance issues. It’s not negligence; it’s being human. By distributing the cognitive load across multiple brains, we catch more problems than any individual could alone.
Planning reveals perhaps our most notorious limitation. The conversation often goes something like this:
“How long will this feature take?”
“About two weeks.”
Three weeks later
“So… about that two-week estimate…”
It’s not that developers deliberately mislead—it’s that creative problem-solving follows unpredictable paths. We discover complexities only after diving in, making precise up-front estimates all but impossible. Developers, burned by past experiences, pad their estimates generously. Product owners push back, creating a negotiation dance that consumes surprising amounts of energy.
Even code itself bears witness to our limitations. Ever worked on a project where each file followed completely different formatting, naming conventions, and patterns? The cognitive cost is substantial—like trying to read a book where each chapter uses different grammar rules. Coding standards exist because consistency reduces mental load. Those automated linting tools that enforce trivial-seeming rules about indentation and brace placement? They free our limited attention for more important matters.
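As one concrete (and purely illustrative) example of offloading those trivial-seeming rules, a team can encode them once in an EditorConfig file so that tooling enforces them automatically and no human ever spends attention on indentation again. The specific settings below are an invented sample, not a recommendation:

```ini
# .editorconfig -- a minimal, hypothetical sketch of codified style rules.
# Editors and linters that support EditorConfig apply these automatically.
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.py]
indent_style = space
indent_size = 4
```

Once rules like these live in a file, consistency stops being a matter of memory or vigilance—exactly the kind of cognitive load we want machines to carry.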
Our testing practices have evolved similarly. Humans naturally focus on the “happy path”—the expected flow through systems. We’re optimists at heart, imagining users who enter perfect data and follow predictable patterns. Reality, of course, has other ideas. Test-driven development flips this tendency by forcing consideration of edge cases before writing functional code. It’s a deliberate counterbalance to our natural inclination to oversimplify.
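To make that reversal concrete, here’s a toy Python sketch (the `parse_price` helper and its rules are invented for illustration): the edge-case checks are written down first, and the implementation exists only to satisfy them.

```python
def parse_price(text: str) -> int:
    """Parse a user-entered price string into integer cents.

    The implementation came second; the checks below were written first
    and forced out the edge cases (blank input, currency symbols,
    garbage, negatives) before any happy-path code existed.
    """
    cleaned = text.strip().lstrip("$")
    if not cleaned:
        raise ValueError("empty price")
    try:
        value = float(cleaned)
    except ValueError:
        raise ValueError(f"not a price: {text!r}")
    if value < 0:
        raise ValueError("negative price")
    return round(value * 100)


# Written first, TDD-style: the happy path plus the unhappy ones.
assert parse_price("19.99") == 1999   # the case we'd naturally start with
assert parse_price("  $5 ") == 500    # whitespace and a currency symbol
for bad in ("", "abc", "-1"):         # inputs optimists forget about
    try:
        parse_price(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"should have rejected {bad!r}")
```

In real TDD these checks would live in a test file and fail first against a stub; the point here is simply that the unhappy paths get enumerated before the code that handles them.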
Agile methodologies emerged partly in response to our psychological need for visible progress. Three months into a year-long waterfall project, with nothing working yet, motivation inevitably wanes. The human brain craves feedback and completion—it’s how we’re wired. Two-week sprints with demonstrable deliverables provide the psychological rewards that keep teams engaged.
Requirements gathering addresses perhaps the most fundamental human limitation: the illusion of clarity. “The system should be user-friendly” sounds perfectly reasonable until five different people share five completely different interpretations of what that means. User stories, acceptance criteria, and specifications create shared reference points, reducing the misalignment that naturally occurs when humans communicate.
These methods aren’t perfect—far from it. They’re pragmatic responses to the challenge of being human in a field that demands inhuman precision. They represent our collective attempts to bridge the gap between the messy reality of human cognition and the exactitude required by machines.
Understanding these connections helps us see that many software practices aren’t arbitrary bureaucracy, but essential adaptations to our cognitive architecture. They’re neither good nor bad in absolute terms—they’re tools we’ve developed to work around the limitations of the most complex component in any software system: the human mind.
Software Development: For Humans, By Humans
Now we’ve established that humans have limitations and that many of our methods evolved to uphold our core values despite these constraints. But how do we ensure things actually go according to plan?
This is where our already-imperfect work processes often go completely off the rails. We genuinely want good documentation for our future selves, yet we end up reluctantly typing docs after stern reminders from managers, only to discover months later that these carefully written words have nothing to do with the actual code. No one knows what’s true anymore, but everyone’s too busy shipping the next feature to fix it.
We start with well-thought-out architectures, proudly designed to be “future-proof.” Then business priorities shift—as they always do—and suddenly features we dismissed as irrelevant become urgent priorities, while those clever structures we carefully designed gather dust like treadmills turned into clothes hangers. So we hack and patch to deliver what’s needed now, good architecture be damned!
Hoping to prevent this waste, we try narrowing our focus to immediate requirements, trying to force everyone to live in the present. After all, who can really see the future? Yet developers and business folks still want to plan ahead—nobody wants to build the same thing twice. So the same problems continue, but now with an even hazier big picture. Before we know it, there’s no real architecture left—just technical debt piling up as features are duct-taped together, creating that classic “big ball of mud” that everyone knows about and swears to avoid, yet somehow keeps rebuilding.
Planning meetings grow longer and more painful. Time estimates become wild guesses as nobody can confidently predict how anything works anymore. Will adding one feature break ten others? Will this clever workaround mess up something else in the codebase? Eventually, fixing the old code takes as much effort as starting over, but starting over is out of the question given deadlines and budgets. So people sigh, shrug, and learn to live with it—the universal developer experience.
Our current software development processes were designed to work with human nature, yet it’s that very same human nature that makes these processes break down. We create systems to overcome our limitations, then struggle with those systems’ own problems, creating new processes to fix those issues, until we’re stuck in layers of procedures that don’t even make sense anymore.
Then a new player enters the game: AI. Could this be our ticket out of this mess? Maybe—but only if we’re smart about how we use it. Simply automating our existing, flawed processes would just help us reach bad outcomes faster.
Trapped in Procedures That Don’t Make Sense Anymore
Have you ever looked around during a lengthy sprint planning session or a particularly tedious stand-up meeting and wondered, “Why exactly are we doing this again?” You’re not alone. Many software development processes, once innovative solutions to real human limitations, now persist simply because they’ve always been done that way. These outdated rituals remain ingrained, despite sometimes hindering rather than helping the teams they’re supposed to support.
Consider the traditional requirement specification document: pages upon pages outlining every minute detail of a feature, painstakingly created—and instantly outdated the moment coding begins. These documents were crucial when everyone needed to see the exact same picture before starting work so each piece could fit together. But the devil is in the details: every one of them had to be agreed upon by all parties, and whenever somebody suggested a revision, the whole stack had to be reworked and the cycle started over.
Today, with the near-instant AI code generation I’ll show you later, when one part is revised, all the others can quickly follow. Do we really need minutely detailed documents to coordinate multiple parties anymore? Or are we just clinging to familiar routines that have outlived their usefulness?
Another classic example: the notorious “meeting to prepare for another meeting.” Once, this made sense—coordination was difficult, expensive, and required careful orchestration. But now, excessive meetings often drain time and energy, generating more overhead than insight. We sit in rooms (virtual or physical) discussing work instead of doing it, then complain we don’t have enough time to do the work we just spent hours discussing. With current communication tools and practices designed for rapid, focused interaction, the hours spent locked in discussions are increasingly difficult to justify.
Then there’s the meticulous task estimation process—hours spent debating whether something is a “3” or a “5,” even though everyone knows estimates rarely hold up once real-world complexities emerge. Shouldn’t we be debating what we should build and why, rather than how much effort it will take? Effort was worth debating when it was the main cost, but with AI helpers the cost of coding has dropped so far that arguing over estimates no longer makes much sense. We persist because that’s just how it’s always been done, not because it’s the best way to get meaningful predictions or manage risk.
These procedures persist not because they’re always beneficial, but because they’re embedded in our workflows, reinforced by habit and familiarity. They were designed when human limitations—slow communication, limited attention spans, imperfect memory—required such scaffolding. But as our capabilities have evolved, many of these traditional approaches no longer match the efficiency, responsiveness, and immediacy we now enjoy.
Our challenge, then, is to recognise when we’re holding onto rituals that once served us well but now slow us down. It’s not about dismissing every familiar practice, but about critically examining their purpose and utility in a world that’s continuously changing around us. And with AI entering the software development arena, that change is accelerating faster than ever before.
The Mistake Everyone Makes: Just Automating Old Workflows with AI
There’s a common pitfall when teams first adopt AI: thinking of it solely as a faster way to automate existing tasks. They assume that the primary benefit is doing the same old work—just quicker and with less manual effort. Unfortunately, this mindset often leads to disappointing results.
Consider a common scenario: Imagine a development process where four different teams sequentially work through documents, passing them along like relay batons. Each team spends valuable hours refining content, translating ideas, and clarifying misunderstandings. Now, someone suggests using AI to speed up this relay race by automating the document creation and editing process. Sure, the documents might move more swiftly from one team to the next—but the fundamental inefficiency remains. The process itself—a serial hand-off—hasn’t changed. You’re still playing a slow, sequential game, just marginally faster.
Or take another familiar situation: reaching a key project decision currently requires five meetings, complete with multiple slide decks, pre-meeting agendas, and follow-up emails. You think, “Let’s automate this!” So you deploy AI to rapidly generate those presentations and agendas. The AI might save some prep time, but does it genuinely improve the decision-making process? Are fewer meetings held, or are you merely generating more polished materials for the same inefficient decision-making cycle?
The core mistake in both examples is clear. Instead of rethinking the process itself, you’re using AI merely to speed up tasks that perhaps shouldn’t exist in their current form at all. It’s like building a faster horse when what you really need is a car. Automating inefficiencies doesn’t eliminate them—it only makes them quicker inefficiencies.
True AI-augmented software development isn’t about doing old tasks faster. It’s about transforming your approach, reconsidering the entire workflow, and reshaping the way you solve problems and collaborate. If you’re only thinking about AI as a faster typist or document formatter, you’re barely scratching the surface of what’s possible. The real innovation occurs when you allow AI to challenge your assumptions about why you do things the way you do.
This doesn’t mean abandoning everything familiar. Remember those core values we discussed earlier? They’re still vital. But the processes we built around those values were designed for a different era—one with different constraints and capabilities. AI gives us an opportunity to revisit those processes while preserving what truly matters.
In the next chapter, we’ll explore how to move beyond mere automation toward true transformation—keeping our eyes on the enduring values of software development while reimagining how we bring them to life.
Conclusion
Throughout this chapter, we’ve examined the traditional software development processes that have guided our industry for decades. We’ve seen how many of these processes evolved as reasonable responses to human limitations—our finite memory, our tendency to miss edge cases, our difficulty coordinating in large groups, and our struggles to predict the future accurately.
We’ve also recognised how easily these once-useful processes can calcify into rituals that persist long after they’ve stopped delivering value. The sprint meetings that run twice as long as they should, the requirements documents that no one fully reads, the estimation exercises that rarely reflect reality—these aren’t just minor annoyances. They’re symptoms of a deeper issue: our tendency to confuse the process with the purpose.
As AI enters the software development landscape, we face both an opportunity and a risk. The opportunity lies in transcending limitations that once seemed inevitable, streamlining workflows, and focusing our human creativity where it matters most. The risk comes from simply automating existing processes without questioning their fundamental assumptions—making our inefficiencies faster rather than eliminating them.
The challenge ahead isn’t just learning to use AI tools effectively. It’s rethinking what software development can be when many of our traditional constraints no longer apply. This doesn’t mean abandoning our core values—quality, maintainability, collaboration, and more. If anything, it means recommitting to those values while being willing to reimagine how we achieve them.
TL;DR
Traditional software development processes were designed around human limitations that AI can now help overcome. Many of our established workflows exist because humans have limited memory, attention, and processing capacity—not because they’re the optimal way to build software. By understanding which processes truly deliver value and which merely work around human constraints, we can leverage AI to transform our approach rather than just automating existing inefficiencies.