Chapter 4

Everything Changes: The New Math of Software Development

How AI dramatically reduces implementation costs, changing fundamental trade-offs

In This Chapter:

  • How AI dramatically reduces implementation costs, changing fundamental trade-offs
  • Why feature development becomes easier, allowing for greater ambition and experimentation
  • When starting fresh is no longer a “crazy idea” but a viable option
  • How technical debt becomes a choice rather than an inevitability
  • Why traditional decision-making frameworks need recalibration for the AI era

Software developers have always lived with trade-offs. Want something fast and good? It won’t be cheap. Need it cheap and fast? Don’t expect quality. Want good and cheap? You’ll be waiting a while. These weren’t just annoying limitations—they shaped everything about how we build software.

Every line of code meant human fingers typing, human brains debugging, and human eyes reviewing. We built our entire approach to software around these constraints, treating them as unchangeable facts rather than temporary technological limitations.

But now AI has entered the chat, and suddenly those “unchangeable facts” don’t look so permanent. When code can be generated and refined at speeds no human could match, when implementation costs drop dramatically while design decisions still need human insight—the fundamental maths of software development transforms.

In this chapter, we’ll explore how AI reshapes what’s sensible, doable, and worthwhile in software development. It’s like what happened when the printing press replaced scribes: the whole industry changed not because the value of the written word was different, but because the cost of producing it had fundamentally shifted.

Why Implementation Just Got Much Cheaper

Remember your first real programming experience? Mine involved building a calculator app in Pascal, hunched over physical books, laboriously deciphering syntax and control flow without the internet to guide me. Years later, I repeated the exercise in Java, slightly easier but still a significant undertaking. The cost of implementing even the simplest features was enormous, measured in human hours spent typing, testing, fixing, and occasionally shouting at inanimate objects.

Fast forward to today. That same calculator app? With AI assistance, a complete beginner could implement it in minutes, not days or weeks. The cognitive load of remembering syntax, finding the right methods, and structuring the code has been largely offloaded to AI. Implementation—the act of translating an idea into working code—has become radically cheaper.

This cost reduction isn’t merely quantitative—it’s qualitative, changing the very nature of development decisions. Think about the calculations you make every day as a developer:

“Should we refactor this messy function?” “Is it worth implementing this nice-to-have feature?” “Do we have time to add better error handling?” “Should we write more tests?”

Each of these questions has traditionally been answered through a cost-benefit analysis that weighs the value of the change against the developer time required. And because developer time was expensive, many worthwhile improvements would fail this test—not because they weren’t valuable, but because implementation costs were too high.

Now recalculate those same decisions with AI assistance. That refactoring that would have taken three days? Now it might take three hours. The nice-to-have feature that wasn’t worth a week of coding? It might now be worth an afternoon. The comprehensive error handling that seemed too time-consuming? Now it can be generated alongside the main functionality with minimal additional effort.

This shift fundamentally alters the ROI calculations for nearly every development decision. Features that would have been cut from scope become viable. Quality improvements that would have been deferred become affordable. Experiments that would have been too costly to try become practical learning opportunities.
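To see how the break-even point moves, here is a toy calculation. The numbers are illustrative assumptions, not measurements, and the 10x cost reduction is the hypothetical figure discussed above, not a guarantee:

```python
# Illustrative only: toy numbers showing how a fixed-value improvement
# flips from "not worth it" to "worth it" when implementation gets cheaper.

def is_worth_doing(value_in_dev_hours: float, cost_in_dev_hours: float) -> bool:
    """A crude cost-benefit test: do the work if the expected value
    (expressed in equivalent developer hours saved) exceeds the cost."""
    return value_in_dev_hours > cost_in_dev_hours

# A refactoring expected to save roughly 8 hours of future maintenance:
refactoring_value = 8.0
traditional_cost = 24.0                    # three days of manual work
ai_assisted_cost = traditional_cost / 10   # assumed 10x cheaper

print(is_worth_doing(refactoring_value, traditional_cost))   # fails the old test
print(is_worth_doing(refactoring_value, ai_assisted_cost))   # passes the new one
```

The value of the refactoring never changed; only the cost side of the inequality did, which is exactly why decisions made with pre-AI cost estimates keep rejecting work that is now worthwhile.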

Consider testing—a classic casualty of tight deadlines. Everyone agrees tests are valuable, but when time is short, they’re often the first thing cut. “We’ll add tests later,” says the team, knowing full well that “later” rarely comes. With AI, you can generate comprehensive test suites alongside your implementation, often with minimal additional effort. The economic barrier to good testing practices largely disappears.

Or take documentation, that perpetually neglected stepchild of software development. Writing clear, comprehensive docs has always competed with coding for developer time, and coding nearly always wins. But AI can generate solid first drafts of documentation based on your code and conversation, making the incremental cost of good documentation dramatically lower. Suddenly, “we don’t have time for docs” becomes a much weaker argument.

Interestingly, languages once considered “harder” for humans—like C, Rust, and Java—actually produce better results when working with AI than more “flexible” languages like JavaScript. Strict typing, explicit memory management, and clear structural conventions give the AI more precise patterns to work with. The very constraints that made these languages challenging for humans to master unaided now serve as guardrails that help AI generate more reliable, secure code. The traditional learning curve flattens considerably when you have an AI collaborator that can handle the syntactic complexities while you focus on the conceptual understanding.

The economics of bugs and mistakes change dramatically too. When implementation is quick but debugging remains time-consuming, preventing bugs becomes even more valuable than before. Here again, AI shifts the equation by helping catch potential issues early and suggesting fixes quickly. Those stricter languages also help by surfacing issues at compile time rather than runtime. Errors that might have derailed a project for days can often be resolved in minutes with the right AI guidance, turning what used to be productivity killers into minor speedbumps.

All of this points to a profound shift in how we allocate the scarce resource of developer attention. When implementation costs drop, the relative importance of thinking about what to build increases. More of your time can be spent on problem understanding, user research, and architectural decisions—the areas where humans still vastly outperform AI.

To be clear, implementation hasn’t become free. Writing good software still requires human judgement, domain knowledge, and problem-solving skills. But the mechanical aspects of translating ideas into code—remembering syntax, writing boilerplate, implementing standard patterns—have become dramatically cheaper. And when a major cost component drops so significantly, everything about how you make decisions needs to be recalibrated.

Smart developers and teams are already adjusting to this new reality. They’re raising their ambitions, expanding their definition of “must-have” vs. “nice-to-have,” and spending more time ensuring they’re building the right thing rather than just building the thing right. Those still using the old calculations—treating implementation as if it’s as expensive as it was in 2020—are leaving enormous value on the table.

In the next sections, we’ll explore how this cost shift affects specific aspects of development, from feature decisions to refactoring to starting fresh. Throughout, remember this core insight: when implementation gets cheaper, you need new maths to make good decisions.

When Adding Features Becomes Easier Than Ever

Product management meetings used to follow a predictable script:

“We really need to add this feature.” “The developers say it’ll take three sprints.” “We don’t have three sprints. What can we cut?”

And so began the painful negotiation between what users wanted, what the business needed, and what developers could reasonably deliver. Feature prioritisation wasn’t just important—it was existential.

With AI-augmented development, this calculation changes dramatically. When implementation speed increases by 5-10x (conservative for developers who’ve mastered AI collaboration), the conversation shifts from “what must we cut?” to “what else can we include?”

Let me share a recent experience: Our team kicked off a project in late 2024 with phases mapped out quarterly through 2025. After adopting an AI-assisted workflow, we completed the first quarter’s work in just six weeks. The second quarter’s planned features? Knocked out in the following two weeks. By the third month, we were completely redesigning the frontend and adding several new features that weren’t planned until late 2025. The constraints that forced painful trade-offs simply disappeared.

This shift creates both opportunities and challenges. The opportunity is obvious: deliver more value sooner. But the challenge is subtle: when implementation constraints loosen, decision-making becomes harder, not easier.

Without the harsh discipline imposed by technical constraints, teams need to develop new muscles for determining what should be built, not just what can be built. The question shifts from “Do we have time for this feature?” to “Does this feature actually belong in our product?”

This requires stronger product thinking and more deliberate design. When almost any feature seems feasible, the risk of bloated, unfocused products increases. The best teams are responding by bringing more rigour to user research and being more intentional about what they choose not to build, even when they could.

There’s another fascinating effect: the ability to test ideas that would previously have been too expensive to try. Now, teams can implement speculative features quickly, get them in front of users, and decide based on actual feedback rather than theoretical discussions.

This faster feedback loop fundamentally changes product strategy. Instead of extensive up-front planning, teams can adopt a more evolutionary approach—trying more ideas, keeping what works, and discarding what doesn’t.

For startups, this shift is particularly powerful. The ability to iterate through more versions quickly increases the odds of finding product-market fit before running out of resources.

Even niche products benefit. Previously, specialised software often lacked polish because the market wasn’t large enough to justify extensive development investment. With lower implementation costs, even products serving smaller markets can justify a more complete experience.

The net effect is a shift from scarcity-based thinking to abundance-based thinking, where the primary constraint becomes our ability to imagine and design great features, not our ability to implement them.

Starting Fresh: No Longer a Crazy Idea

“We should rewrite it from scratch.”

These words have triggered cold sweats in engineering managers for decades. The “complete rewrite” has an infamous reputation in software development, and for good reason. The stories are legendary: teams disappearing for years, budgets exploding, and projects that either never ship or arrive missing critical features from the original system.

Joel Spolsky’s 2000 essay “Things You Should Never Do” cemented the conventional wisdom: rewrites are almost always a mistake. The accumulated knowledge embedded in an existing codebase—even a messy one—represents years of edge cases, bug fixes, and business logic refinements that are nearly impossible to recreate from a blank slate without repeating all the same painful lessons.

This wisdom wasn’t wrong. In a world where every line of code required substantial human effort, the cost of reproducing functionality was prohibitively high. The maths simply didn’t work. Spending three years to rebuild what you already had, just to end up in roughly the same place (but with nicer code), rarely made business sense.

But AI changes this equation dramatically. When implementation costs plummet, suddenly “starting fresh” enters the realm of reasonable options.

With AI assistance, teams can take a different approach. They can define the core functionality, capture the essential business rules, and rewrite entire systems in weeks rather than years. Not partial rewrites or minimum versions—complete replacements with every feature intact and significant improvements added. These timelines would have been unthinkable in the pre-AI era. But when implementation costs drop by an order of magnitude, the maths of rewriting changes completely.

The key insight is that business logic and requirements—the truly valuable parts of a system—can now be transferred to new implementations without paying the full “recreation cost” that made rewrites so expensive. You’re no longer paying to rediscover and reimplement every detail; you’re paying for the judgement to decide what to keep, what to improve, and how to structure the new system.

Crucially, the better your “why” documentation and test cases, the cheaper the reimplementation becomes. Well-documented business rules, clear explanations of design decisions, and comprehensive tests that capture expected behaviour all become invaluable assets when rewriting with AI. They provide the essential context that allows AI to generate appropriate implementations while preserving the accumulated wisdom from the original system. What once seemed like documentation overhead now becomes a strategic investment in future flexibility.
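A sketch of what such an asset looks like in practice: a business rule captured as an executable test. The rule, names, and numbers below are invented for illustration; the point is that a test like this carries the “why” into any reimplementation, in any language:

```python
# Hypothetical business rule encoded as code plus assertions. When a system
# is rewritten (AI-assisted or otherwise), the new implementation is correct
# only if these assertions still pass against it.

def shipping_fee(order_total: float) -> float:
    """Orders of 50.00 or more ship free; otherwise a flat 4.99 applies."""
    return 0.0 if order_total >= 50.0 else 4.99

# Edge cases the original team presumably learned the hard way:
assert shipping_fee(49.99) == 4.99   # just under the threshold still pays
assert shipping_fee(50.00) == 0.0    # the threshold itself ships free
assert shipping_fee(120.00) == 0.0
```

A few hundred assertions like these are far cheaper to carry into a rewrite than re-discovering each rule from production incidents.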

This doesn’t mean every legacy system should immediately be rewritten. The decision still requires careful consideration of factors beyond just implementation cost:

  • How well is the existing system understood?
  • How much of the current behaviour is actually desired vs. accidental?
  • Are there critical integrations that would be difficult to replicate?
  • What’s the impact of system downtime or transition periods?
  • Does your team have the domain knowledge to guide a successful rewrite?

But for many systems, particularly those where the core functionality is well-understood but the implementation has become a maintenance nightmare, the option to start fresh has moved from “career-limiting move” to “legitimate strategic choice.”

This shift extends beyond complete rewrites to component-level replacements as well. That dreadful payment processing module everyone’s afraid to touch? That reporting engine built by someone who left five years ago? The performance-critical algorithm no one fully understands? All of these become candidates for clean-slate replacement when implementation costs drop dramatically.

Perhaps most significantly, the reduced cost of starting fresh changes how we think about technical debt. In the traditional world, accumulated technical debt could become so overwhelming that systems effectively became “too big to replace”—organisations were stuck maintaining increasingly dysfunctional codebases because the cost of alternatives was prohibitive.

Now, technical debt still matters, but its power to hold organisations hostage diminishes. When the cost of replacement drops, the leverage shifts. Teams can more credibly say “If this becomes too painful to maintain, we can replace it,” changing the calculation about how much technical debt is tolerable and for how long.

This newfound freedom to start fresh also impacts how we approach architecture. When rebuilding becomes more feasible, architectures can be more experimental and evolutionary. You can try approaches knowing that if they don’t work out, course-correction won’t be prohibitively expensive. This encourages both more innovative initial designs and more willingness to admit when an approach isn’t working.

The stigma around large-scale rewrites may take time to fade, as it’s deeply ingrained in software development culture. But the economics that created that stigma have fundamentally changed. Teams that recognise this shift gain a powerful new option in their strategic toolkit—the ability to periodically refresh their technical foundations without betting the company on each renewal.

Goodbye to Unnecessary Overheads and Technical Debt

“There is nothing as permanent as a temporary solution.”

This old developer joke rings painfully true for anyone who’s inherited legacy systems. Those quick fixes, workarounds, and “we’ll improve it later” compromises have a strange way of outliving the developers who created them, sometimes by decades. What starts as technical debt ends up as technical mortgage—a long-term commitment that keeps teams paying interest far into the future.

In traditional software development, these compromises were often unavoidable. With limited resources and pressing deadlines, teams made rational trade-offs: implement features now, improve the underlying architecture later. Except “later” rarely comes. New priorities always emerged, pushing those cleanup tasks perpetually to the bottom of the backlog.

The result? Systems accumulating layer upon layer of technical debt until they became nearly unmaintainable—fragile, slow, and resistant to change. But teams continued maintaining them because the cost of alternatives seemed even higher.

AI fundamentally changes this equation by dramatically reducing the cost of both prevention and cure.

On the prevention side, maintaining high code quality no longer requires the same trade-offs against development speed. When AI helps you implement features, you can simultaneously maintain cleaner architecture, better patterns, and more robust error handling—all without significantly extending timelines. The “quick and dirty” approach loses much of its appeal when “quick and clean” becomes nearly as fast.

Consider common sources of technical debt like duplicate code, insufficient error handling, missing tests, or inconsistent patterns. In traditional development, fixing these issues meant explicitly allocating scarce developer time—time that could otherwise go toward new features. With AI assistance, you can address these quality concerns almost as a side effect of normal development, embedding better practices into every new feature without the traditional speed penalty.

Even better, AI can help identify and fix existing technical debt as you work. Refactoring that once required days of careful manual work now takes hours with AI assistance. Code that made sense only to its original author can be transformed into clear, well-documented implementations without the archaeological expedition that such work once required.

This shift has profound implications for how organisations approach software quality. When the cost of doing things right dramatically decreases, the excuses for cutting corners lose their economic basis. Teams can maintain higher standards without sacrificing velocity, making “move fast and break things” feel like an unnecessary risk rather than a productivity necessity.

Beyond just code-level debt, AI reduces many organisational overheads that teams have traditionally accepted as unavoidable costs of software development:

  • Documentation debt: Comprehensive documentation no longer competes with coding time when AI can generate and maintain docs alongside the code itself.

  • Knowledge silos: When specialised knowledge can be quickly captured and transferred via AI, teams depend less on any individual’s tribal knowledge.

  • Onboarding friction: New team members can become productive faster when AI helps them understand codebases and implement features in unfamiliar territory.

  • Learning curve penalties: Adopting better technologies no longer imposes the same productivity tax when AI can guide developers through unfamiliar patterns and APIs.

These overhead reductions compound over time. Teams spend less energy on struggles with legacy code, internal friction, and context preservation, freeing more capacity for meaningful work that delivers actual value.

Perhaps most significantly, the nature of the conversation around technical debt changes. In traditional environments, architects and senior developers often fought uphill battles for cleanup time, having to justify “non-feature work” to business stakeholders who saw limited immediate value. With AI assistance, the distinction between “feature work” and “cleanup work” blurs—teams can often deliver both simultaneously, making trade-off discussions less contentious.

This doesn’t mean technical debt will magically disappear. Poor architectural decisions, tangled dependencies, and confusing business logic can still accumulate. But the means to address these issues and the reasons that drive them have fundamentally changed. Technical debt becomes less of an inevitability and more of a choice—a choice with fewer justifications than ever before.

For teams adapting to this new reality, the implications are clear: accept fewer compromises, establish higher baseline standards, and invest consistently in keeping your systems clean rather than letting technical debt accumulate. The “we’ll fix it later” promise can actually be kept when “later” requires hours instead of weeks.

Those “temporary” solutions might finally be temporary after all.

Conclusion

Throughout this chapter, we’ve explored how AI fundamentally changes the maths of software development by dramatically reducing implementation costs. This shift transforms what’s possible, practical, and worthwhile—not just making development faster, but qualitatively changing how we approach building software.

The industry’s centre of gravity shifts from “how do we build it?” to “what should we build?”—from execution to strategy. Developers spend more time on high-value activities: understanding problems deeply, designing solutions, and making architectural decisions. Teams deliver more value without compromising quality or accumulating technical debt.

Adapting requires recalibrating how we make decisions and measure success. Old calculations about what’s “too expensive” or “not worth it” need thoughtful reconsideration.

Those who embrace this shift will find themselves with significant advantages. Those who continue operating under old constraints, treating implementation as if it’s as expensive as it was in 2020, will increasingly find themselves left behind.

TL;DR

AI fundamentally changes the maths of software development, making previously expensive decisions more feasible. Implementation costs drop dramatically, allowing teams to add more features, maintain higher quality standards, and even consider fresh starts without traditional trade-offs. This shift requires new decision-making approaches that recognise these changed costs. Technical debt becomes less of an inevitability and more of a choice as the cost of both prevention and cure decreases significantly.