Coding Together: You're the Boss
In This Chapter:
- How the coding relationship with AI transforms implementation from typing to directing
- Why clear guidance and judgement remain essential even with powerful AI assistance
- Techniques for effective AI pair programming without surrendering control
- How to tackle complex problems by breaking them down for your AI partner
- The importance of making key decisions permanent beyond the conversation
“I had to completely rewrite everything the AI gave me.”
If you’ve heard colleagues say this (or said it yourself), you’ve witnessed a fundamental misunderstanding of AI-augmented development. The complaint reveals an expectation that AI should deliver perfect, production-ready code after a brief prompt—almost like placing an order at a restaurant and expecting a complete meal to arrive.
This misconception leads to disappointment and the conclusion that “AI isn’t ready for real development work.” But it’s equivalent to hiring a junior developer, giving them vague instructions, providing no feedback, and then complaining when they don’t read your mind.
The reality of effective AI-augmented development looks quite different. At its best, it resembles a dance between partners—one with architectural vision and domain knowledge (you), the other with tireless implementation capabilities and pattern recognition (the AI). You lead, guiding the direction and making key decisions. The AI follows, handling much of the implementation work while responding to your feedback and adjustments.
This chapter explores that dance—how to lead effectively, when to provide detailed guidance, when to give your AI partner room to work, and how to maintain the right balance of control and delegation. We’ll see how coding transforms from a primarily manual activity to a primarily directorial one, where your value comes not from typing speed but from clear thinking, effective guidance, and sound judgement.
We’ve reached the heart of the development process—the actual implementation phase where concepts transform into working code. In previous chapters, we established the foundation through ideation, requirements, and tests. Now it’s time to bring our ideas to life.
The shift in how we implement software with AI assistance is perhaps the most profound change in the entire development lifecycle. It’s not just that coding becomes faster—though it certainly does. It’s that the very nature of the activity transforms. Rather than spending most of your time typing and debugging syntax, you’ll focus on architecture, patterns, edge cases, and overall direction—elevating your work to a higher level of abstraction while your AI partner handles much of the mechanical implementation.
Before we jump into specific techniques, remember this core principle: You remain the boss. The AI is your copilot, not the captain. No matter how impressive its capabilities become, your judgement, your understanding of the broader context, and your responsibility for the final result remain irreplaceable. The goal isn’t to surrender control to AI, but to extend your capabilities through an effective partnership.
Let’s explore what this partnership looks like in practice, starting with a clear-eyed view of what to actually expect from your AI collaborator.
Don’t Expect to Just Ask and Get It Done
“Build me an app.”
If that’s your entire instruction to an AI, you deserve whatever half-baked solution you get back. It’s like walking into an architect’s office and saying, “Design me a house,” without mentioning how many bedrooms you need, your budget constraints, or whether you prefer modern minimalism or Victorian grandeur.
One of the most pervasive misconceptions in the current AI landscape is what I call the “vending machine fallacy”—the belief that you can insert a brief prompt, press a button, and receive perfectly customised, production-ready code. This misunderstanding isn’t accidental; it’s actively promoted by countless AI tool marketing campaigns promising “Just ask and get it done!” or “Turn your thoughts into code instantly!”
These slogans might sell subscriptions, but they set developers up for disappointment. They fundamentally misrepresent how effective software development works, with or without AI. Good software emerges through exploration, refinement, and thoughtful iteration—not from instantaneous manifestation of vague ideas.
When the reality doesn’t match the marketing hype, developers often conclude that “AI isn’t ready for real work” rather than recognising that their expectations were unrealistic to begin with. It’s not that AI tools don’t work—it’s that they don’t work the way they’re often advertised to work.
Let’s see how this plays out in practice:
Example: The Vending Machine Approach vs. Effective Collaboration
Consider the difference between the “vending machine” approach and effective collaboration when building a user authentication system:
**You**: Create a user authentication system for my web application.
**AI**: Here's a simple user authentication system using Express and JWT:
[AI generates code for a basic authentication system with login and register endpoints]
**You**: This doesn't work for our needs. Start over and make it better.
**AI**: I'll create a more robust authentication system:
[AI generates more code with additional features but still missing context]
This approach fails not because AI is incapable, but because software development has never worked this way. Even the smartest human developer would struggle with such vague requirements.
Now let’s see a more effective collaboration:
**You**: We need to implement user authentication for our e-commerce application. We're using React on the frontend and Express/Node on the backend with a MongoDB database. We need the following:
- Email/password login
- Social auth with Google and Facebook
- Password reset functionality
- Account verification via email
- JWT tokens for API authentication
- Role-based permissions (admin, customer, support)
The solution should align with our security requirements of storing hashed passwords and using HTTPS for all communications.
**AI**: I'll help implement this authentication system. Before diving into code, let me confirm a few design decisions:
1. For JWT tokens, would you prefer short-lived access tokens with refresh tokens, or longer-lived single tokens?
2. How should we handle failed login attempts? Do you want account lockouts after multiple failures?
3. For password requirements, do you have specific complexity rules?
4. How should roles be structured - as simple string values or more complex permission objects?
**You**: Good questions. We should use short-lived access tokens (15 minutes) with refresh tokens (7 days). For failed logins, let's lock accounts for 15 minutes after 5 failed attempts. Passwords should be at least 8 characters with at least one uppercase, lowercase, number and special character. Roles should be string values ('admin', 'customer', 'support') but with a separate permissions collection that maps roles to specific permissions.
**AI**: Thanks for the clarification. Here's my proposed approach for the authentication system:
[AI outlines a detailed architecture for the authentication system based on the requirements and clarifications]
Does this approach align with your expectations? I can start implementing specific components once we confirm the overall architecture.
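Decisions like the ones agreed above translate directly into small, testable pieces of code. As a rough sketch (the helper names are mine, not part of any framework), the password and lockout rules from this exchange might be encoded like this:

```javascript
// Password rule from the conversation: at least 8 characters with
// at least one uppercase, lowercase, number and special character.
function isPasswordAcceptable(password) {
  return (
    password.length >= 8 &&
    /[A-Z]/.test(password) &&
    /[a-z]/.test(password) &&
    /[0-9]/.test(password) &&
    /[^A-Za-z0-9]/.test(password)
  );
}

// Lockout rule from the conversation: lock the account for 15 minutes
// after 5 failed login attempts.
const MAX_FAILURES = 5;
const LOCKOUT_MS = 15 * 60 * 1000;

function isLockedOut(account, now = Date.now()) {
  return (
    account.failedAttempts >= MAX_FAILURES &&
    now - account.lastFailureAt < LOCKOUT_MS
  );
}
```

Capturing agreed rules as tiny pure functions like these also makes them easy to unit test against the requirements you settled with the AI.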
The difference is striking. In the first example, you’re trying to use AI as a mind-reading code vending machine. In the second approach, you’re treating AI as what it actually is: a collaborative developer that needs context, responds to questions, and improves with feedback.
The real power isn’t in magically producing perfect code from vague prompts—it’s in the unprecedented speed and flexibility of the collaborative loop.
You can provide feedback, the AI can implement changes, you can evaluate the results, and the cycle can repeat dozens of times in a single session. This isn’t just “faster coding”—it’s a fundamentally different way to explore the solution space, rapidly iterating toward better outcomes than either humans or AI could achieve alone.
This more effective approach follows several key principles:
1. **Provide sufficient context.** Don’t assume the AI remembers your entire project or understands all your requirements. Reinforce the important details, even if you’ve mentioned them before. Reference specific tests or requirements that should guide the implementation.
2. **Break complex tasks into manageable pieces.** Rather than asking for an entire system at once, focus on well-defined components. “Let’s implement the parser module first” is better than “Build the PDF importer.”
3. **Iterate with feedback.** View the AI’s output as a first draft, not a final product. Provide specific feedback: “The type annotation handling works well, but it’s missing support for Optional types” is more helpful than “This isn’t right.”
4. **Guide, don’t just reject.** When something isn’t working, explain why and suggest a direction. “This approach won’t scale well with large files. Let’s try a streaming parser instead” helps the AI understand your thinking.
5. **Be the domain expert.** Remember that you understand your business needs and constraints in ways the AI can’t. Share that context: “Our users specifically need to see deprecated API methods highlighted in red.”
Don’t fall for the “just ask for what you want” marketing myth. Embrace the reality that good software development has always been a thoughtful, iterative process—AI just makes that process dramatically faster and more fluid.
Being Your Own Tech Lead (Even as a Junior Dev)
Ever watched a senior developer lead a technical discussion? They seem to operate on a different level—thinking in systems rather than lines of code, considering trade-offs that juniors might not even recognise, and making decisions that balance immediate needs with long-term consequences.
This architectural thinking isn’t just a matter of experience—it’s a different mode of operating entirely. And it’s precisely the level at which you need to work when coding with AI.
In traditional development, junior developers typically focus on implementing specific features according to established patterns. They might not fully understand why the codebase is structured a certain way or the implications of different architectural choices. That broader perspective usually comes with years of experience and often formal guidance from tech leads or senior developers.
AI-augmented development flips this dynamic on its head. Even if you’re early in your career, you’re suddenly thrust into a tech lead role—making architectural decisions, establishing patterns, and guiding implementation—while the AI handles much of the execution. It’s like being promoted three levels overnight without the gradual preparation that usually accompanies such advancement.
Let’s see how this plays out in practice:
Example: Making Architectural Decisions with AI
**You**: I need to design a data processing pipeline for IoT sensor data. We'll be collecting temperature, humidity and pressure readings from thousands of devices every 5 minutes. The data needs to be stored and made available for both real-time monitoring dashboards and historical analysis.
**AI**: For this IoT data processing pipeline, we have several architectural options:
1. **Traditional Database Approach**
- Store data directly in a time-series database like InfluxDB or TimescaleDB
- Advantages: Simpler architecture, direct queries, mature technology
- Challenges: Scaling with very high device counts, potentially higher costs
2. **Stream Processing Approach**
- Kafka/Kinesis for data ingestion
- Stream processing with Spark/Flink for real-time analytics
- Data warehouse for historical data
- Advantages: Better scaling, real-time processing capabilities
- Challenges: More complex architecture, higher operational overhead
3. **Lambda Architecture**
- Combine stream processing for real-time views
- Batch processing for accurate historical analysis
- Advantages: Optimized for both real-time and historical use cases
- Challenges: Maintaining dual processing paths
4. **Serverless Event-Driven Approach**
- AWS IoT/Google IoT for device connections
- Functions (Lambda/Cloud Functions) for processing
- Managed services for storage
- Advantages: Low operational overhead, pay-per-use pricing
- Challenges: Potential cold start issues, vendor lock-in
Which direction seems most appropriate for your use case?
**You**: I like the serverless approach since we're still growing and don't want to manage infrastructure. But I'm concerned about cold starts affecting real-time monitoring. Would a hybrid approach make sense - maybe using serverless for ingestion but a small dedicated cluster for real-time processing?
**AI**: A hybrid approach makes excellent sense for your use case. Here's how it could work:
[AI outlines a hybrid architecture combining serverless components with a small dedicated processing cluster]
This architecture gives you the operational benefits of serverless for most components while avoiding cold start issues for real-time monitoring. It also provides a clear path to scale up dedicated components as your device count grows.
Does this align with your thinking? Would you like to explore any particular components in more detail?
This moment—deciding between architectural approaches—is exactly where AI transforms your role. In a traditional team, a senior developer might make this call. With AI, it’s your decision, regardless of your experience level.
Here are strategies to effectively play this tech lead role:
1. **Ask clarifying questions before deciding.** Don’t feel pressured to make architectural decisions immediately. Good tech leads probe deeper before committing to an approach.
**You**: Before we commit to this architecture, I want to understand the trade-offs better. What would be our options for handling device firmware updates in this model? And how would this approach handle devices that go offline for extended periods?
**AI**: [Provides detailed analysis of how the architecture handles these specific scenarios]
**You**: Thanks, that helps clarify things. Let's go with the hybrid approach, but let's make sure we design the data storage to handle out-of-order message delivery since many of our devices operate in areas with poor connectivity.
2. **Make decisions explicit.** Once you have enough information, make clear architectural decisions rather than letting them emerge implicitly. This helps both you and the AI maintain consistency.
3. **Establish patterns and conventions early.** As a tech lead, you’re not just deciding what to build but how to build it. Setting clear patterns helps ensure consistency across components.
**You**: For this project, let's establish some key conventions:
- We'll use TypeScript with strict typing for all components
- Error handling should follow a consistent pattern with structured error objects
- All services should provide health check endpoints
- Configuration should be environment-based with sensible defaults
- Logging should use structured JSON format with consistent field names
Can you keep these conventions in mind as we implement the components?
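Conventions like these are easier for both you and the AI to follow when they are pinned down in code. Here is a minimal sketch of what the “structured error objects” convention could look like (shown in plain JavaScript for brevity; the class and field names are hypothetical, not from any library):

```javascript
// One possible shape for a structured error object convention:
// every error carries a machine-readable code and structured details.
class AppError extends Error {
  constructor(code, message, details = {}) {
    super(message);
    this.code = code;       // machine-readable identifier, e.g. 'AUTH_TOKEN_EXPIRED'
    this.details = details; // structured context for logs and API responses
  }

  // Matches the structured JSON logging convention: one consistent shape
  // that can be serialised into log lines and error responses alike.
  toJSON() {
    return { code: this.code, message: this.message, details: this.details };
  }
}
```

With a convention like this in place, you can tell the AI “raise an AppError here” instead of re-describing your error-handling rules in every prompt.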
4. **Think about extension points.** Good tech leads anticipate future needs. Consider where other developers might need to extend your system.
**You**: As we design this system, I want to make sure we can easily add support for new sensor types in the future without modifying existing code. Can you suggest a design pattern that would make this extension easy?
**AI**: [Suggests a strategy pattern or plugin architecture with examples of how new sensor types could be added]
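One common shape for such a design is a registry that maps sensor types to handlers, so supporting a new type means registering it rather than editing existing code. A minimal sketch (all names here are illustrative):

```javascript
// Registry-based plugin approach: new sensor types are added by
// registration, without modifying the parsing code itself.
const sensorParsers = new Map();

function registerSensorType(type, parse) {
  sensorParsers.set(type, parse);
}

function parseReading(raw) {
  const parse = sensorParsers.get(raw.type);
  if (!parse) throw new Error(`Unknown sensor type: ${raw.type}`);
  return parse(raw);
}

// Existing types register themselves once...
registerSensorType('temperature', (raw) => ({ celsius: raw.value }));

// ...and a new sensor type later is just another registration:
registerSensorType('humidity', (raw) => ({ percent: raw.value }));
```

The payoff is exactly the extension property asked for above: the pipeline code never changes when a new sensor type arrives.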
5. **Challenge the AI when appropriate.** Sometimes your AI partner will suggest approaches that don’t align with your goals. Don’t hesitate to redirect.
6. **Balance ideal with practical.** Experienced tech leads know when to compromise. Perfect is the enemy of done.
**You**: While a fully normalised data model might be theoretically better, I think for our use case a denormalised approach would give us better query performance with acceptable storage trade-offs. Let's design the data model that way.
The shift to this tech lead role might feel uncomfortable at first, especially if you’re accustomed to receiving more guidance. The key is recognising that your value isn’t in writing every line of code, but in making thoughtful architectural decisions that guide the implementation.
For junior developers, this represents an unprecedented opportunity for growth. Instead of spending years implementing features before getting a chance to make architectural decisions, you can gain experience with higher-level software design immediately.
Whether junior or senior, embracing this tech lead role is essential for effective AI-augmented development. The AI can write code faster than any human, but it can’t decide what code should be written. That remains your domain, and it’s where your true value lies.
REFLECTION POINT
The balance between delegation and direction is personal to each developer. Reflect on where you tend to fall on this spectrum. Do you provide enough context? Do you verify understanding before implementation? Our Companion AI offers a safe environment to practise these techniques. Ask: “How would you recommend I guide an AI to implement this feature while maintaining control of the architecture?”
Pair Programming Where Your Partner Never Gets Tired
Traditional pair programming has always been a bit like dancing with a human partner who’s constantly glancing at the clock. The benefits—knowledge sharing, fewer bugs, better design decisions—are real, but so are the costs. Your partner gets hungry, tired, or simply bored after hours of staring at the same problem. They have meetings to attend and holidays to take. And sometimes the interpersonal dynamics become more challenging than the technical problems you’re trying to solve.
Enter AI pair programming—all of the benefits with none of the hangry outbursts or awkward silences.
This new form of collaboration transforms the development experience in ways both obvious and subtle. The most apparent change is stamina: your AI partner never needs a coffee break, never gets frustrated with repetitive tasks, and doesn’t start eyeing the door at 4:45 PM. You can pair for fourteen hours straight if that’s your thing (though I don’t recommend it—you still need sleep, even if your partner doesn’t).
But the differences go much deeper than mere endurance.
Example: AI Pair Programming in Action
Let’s see what effective AI pair programming looks like when implementing a data visualisation component:
**You**: I need to implement a dashboard component that visualises user retention data. The data comes in this format:
[
{"cohort":"Jan","month0":100,"month1":88.8,"month2":79.5,"month3":74.2,"month4":68.2,"month5":65.4,"month6":59.4,"totalUsers":2854},
{"cohort":"Feb","month0":100,"month1":89.2,"month2":80.6,"month3":72.1,"month4":65.3,"month5":62.3,"month6":55.7,"totalUsers":2960}
]
I'd like to create a heatmap where each row is a cohort, the columns are the months, and the cells are coloured based on the retention percentage (darker blue for higher values).
**AI**: I'll help you implement this retention heatmap visualization. Before I start coding, let me make sure I understand what we're building:
- Each row represents a user cohort (Jan, Feb, etc.)
- Each column represents months since acquisition (month0, month1, etc.)
- Cell values show the percentage of users still active after that many months
- Colors should be blue with intensity based on the retention value
Is that correct? And are you using any specific UI framework like React, Vue, or vanilla JavaScript?
**You**: Yes, that's correct. We're using React with Tailwind CSS for styling. I'd prefer a solution that doesn't pull in additional visualisation libraries if possible, but if you think a library would be substantially better, I'm open to it.
**AI**: For this visualization, we can create a custom React component using Tailwind CSS without additional libraries. Here's my proposed implementation:
[AI generates code for a React component that renders the retention heatmap using CSS grid and dynamic colour calculation]
This implementation uses CSS grid to create the heatmap structure, with dynamic color calculation based on the retention percentage. The component is self-contained and doesn't require any external visualization libraries.
Would you like me to explain any part of this implementation in more detail?
**You**: This looks good, but I notice a few issues:
1. We should include the total users count somewhere in the display
2. The colour scale is hard to interpret - can we add a legend?
3. I'd prefer if we rounded the percentages to one decimal place
**AI**: Good points! Let me address each issue:
[AI updates the code to include total users, add a colour legend, and round percentages]
I've made the following improvements:
1. Added a column showing total users for each cohort
2. Created a legend showing the color scale from 0% to 100%
3. Updated the formatting to show percentages with one decimal place
The legend uses the same color calculation function as the main heatmap to ensure consistency. Is there anything else you'd like to improve about this visualization?
**You**: This looks much better! One more thing - can we add a tooltip that shows the exact value when hovering over a cell? That would make it easier to see precise numbers.
**AI**: [Updates the code to add tooltips with detailed information on hover]
I've added tooltips that appear when hovering over each cell. The tooltip shows:
- The cohort name
- The month number
- The exact retention percentage
- The absolute number of users retained
The tooltips are positioned above the hovered cell and fade in/out smoothly. I've made them accessible by using proper ARIA attributes as well.
Is this what you had in mind for the tooltips?
This interaction illustrates several key aspects of effective AI pair programming:
1. **Initiative balance shifts.** Unlike human pairs where initiative typically switches back and forth, with AI you’ll generally maintain leadership throughout the session. The AI might suggest approaches or ask clarifying questions, but you remain the primary decision maker. This requires more sustained engagement from you but also gives you more consistent control over direction.
2. **Feedback loops shorten dramatically.** With human pair programming, there’s usually a meaningful delay between suggesting a change and seeing its implementation. With AI, that cycle collapses to seconds. This acceleration enables a more exploratory approach—you can try ideas, evaluate them, and pivot quickly when needed.
3. **Explanation becomes more important than watching.** In traditional pairing, the navigator often learns by watching the driver implement solutions. With AI, the learning happens differently—through the explanations you receive about code the AI has generated.
**You**: Can you explain how you're calculating the cell background colours? I'm not familiar with that approach.
**AI**: [Explains the colour calculation algorithm in detail with examples]
To maximise learning, ask for explanations of approaches you don’t fully understand, alternative implementations the AI considered, or the reasoning behind specific design choices.
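For instance, one plausible version of the colour calculation being discussed simply scales a blue’s opacity with the retention percentage. This is a sketch of one reasonable approach, not the component from the example itself (the specific colour value is arbitrary):

```javascript
// Map a 0-100 retention percentage to a blue whose opacity grows
// with the value, so higher retention reads as a darker cell.
function retentionColor(percent) {
  // Clamp to the valid range, then scale opacity between 0.05 and 1.
  const clamped = Math.min(100, Math.max(0, percent));
  const alpha = 0.05 + (clamped / 100) * 0.95;
  return `rgba(37, 99, 235, ${alpha.toFixed(2)})`;
}
```

Because the legend reuses the same function, the scale and the cells can never drift out of sync, which is the kind of design reasoning worth asking the AI to explain.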
4. **Knowledge gaps manifest differently.** With human pair programming, knowledge gaps typically emerge through conversation—“I’m not familiar with that API” or “I’ve never implemented that pattern before.” With AI, gaps appear more subtly, often through generated code that doesn’t quite match your expectations.
This requires a different kind of attention. You need to review AI-generated code carefully, looking for misunderstandings or incorrect assumptions. When you spot these issues, take time to provide explicit clarification rather than just requesting changes.
5. **The pair’s memory becomes your responsibility.** Human pairs naturally build a shared context through continued conversation. With AI, context management becomes primarily your responsibility. You may need to remind your AI partner of decisions made earlier in the session or refer back to specific aspects of your requirements.
Despite these differences, the fundamental benefit of pair programming remains: two minds are better than one, even when one of them is artificial. Your AI partner will spot patterns you might miss, suggest alternatives you hadn’t considered, and help implement solutions more robustly than either of you might achieve alone.
The result is often higher quality code in less time—a pair programming experience where you get all the benefits without the scheduling headaches or interpersonal complications.
Review, Improve, Repeat—Without the Ego
Let’s face it: receiving criticism about your code can sting, even from the most tactful reviewer. We’ve all been there—defensively explaining our brilliant algorithm while secretly plotting to “accidentally” spill coffee on the keyboard of whoever dared suggest our variable naming could use work. Despite knowing feedback improves our code, our primate brains still register criticism as a social threat.
This is where AI code collaboration truly shines. Your AI partner offers something unprecedented in software development: judgement-free, ego-free collaboration. It doesn’t care if this is your first refactoring or your five-hundredth. It won’t remember that embarrassing bug you introduced last week or subtly remind you about “that time you argued for two hours that your solution was optimal.”
This psychological safety transforms the review and improvement process from something many developers quietly dread into a genuinely productive conversation.
Example: Ego-Free Code Review and Improvement
**You**: Can you review this authentication middleware function I wrote? I'm not sure if it's handling all the edge cases properly:
const jwt = require('jsonwebtoken');

function authMiddleware(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) {
    return res.status(401).json({ message: 'No token provided' });
  }
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(401).json({ message: 'Invalid token' });
  }
}
**AI**: I've reviewed your authentication middleware function. It's a good start with the core functionality in place, but I see several areas for improvement:
**Strengths:**
- Correctly extracts the token from Authorization header
- Properly verifies the token and attaches user data to the request
- Has basic error handling for missing or invalid tokens
**Improvement Opportunities:**
[AI describes several improvements of the given code]
Here's an improved version:
[AI generates improved authentication middleware with detailed comments explaining each enhancement]
Would you like me to explain any of these improvements in more detail?
**You**: This is really helpful! I hadn't considered the different types of JWT errors or the caching issue. Can you elaborate on the security headers you added? I'm not familiar with all of them.
**AI**: [Provides detailed explanation of security headers and their importance]
These security headers help protect against various common vulnerabilities in web applications. The Cache-Control header is particularly important for authentication endpoints to prevent tokens from being stored in browser caches or proxies.
**You**: Great explanation. One more question - I noticed you added a check for the token expiration date even though jwt.verify already checks this. Is there a reason for the redundancy?
**AI**: That's an excellent observation! You're right that jwt.verify does check for token expiration and will throw a TokenExpiredError if the token has expired.
The additional expiration check I added serves a different purpose - it detects tokens that are getting close to expiration (in this case, within 15 minutes) but haven't expired yet. This enables "silent refresh" functionality, where your frontend can proactively get a new token before the current one expires, creating a smoother user experience.
However, if you're not implementing silent refresh, this is indeed redundant and could be removed. Would you prefer to keep this proactive check or remove it to simplify the code?
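The near-expiry check described here is small enough to show in full. A sketch, assuming the decoded token carries the standard JWT `exp` claim in seconds since the epoch (the function name is illustrative):

```javascript
// "Silent refresh" window from the conversation: flag tokens that are
// still valid but will expire within the next 15 minutes.
const REFRESH_WINDOW_S = 15 * 60;

function shouldSilentRefresh(decodedToken, nowS = Math.floor(Date.now() / 1000)) {
  return (
    decodedToken.exp > nowS &&                    // not yet expired
    decodedToken.exp - nowS < REFRESH_WINDOW_S    // but expiring soon
  );
}
```

A middleware could set a response header when this returns true, letting the frontend fetch a fresh token before the current one lapses.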
What’s remarkable about this exchange isn’t just the technical improvements—it’s the psychological dynamics. There’s no defensiveness, no social awkwardness, no concern about looking inexperienced. Just two minds focused on making the code better.
This ego-free collaboration enables several powerful patterns:
1. Rapid, iterative improvement
Rather than the start-stop rhythm of traditional code reviews (write code, submit for review, wait, address comments, repeat), AI collaboration creates a continuous improvement cycle. You can move through dozens of refinement iterations in a single session, with each version building on the insights from the previous one.
2. Learning without embarrassment
We all have knowledge gaps, but admitting them can be difficult, especially for junior developers or those new to a technology. With AI, you can ask “naive” questions without fear of judgement.
3. Code as a conversation, not a commitment
Traditional development often treats code as a statement—something you commit to and defend. With AI collaboration, code becomes more like a conversation—fluid, exploratory, and evolving.
4. Comprehensive improvement, not just spot fixes
Human reviewers often focus on localised issues—a variable name here, a missing check there. AI can provide this detailed feedback but can also suggest broader structural improvements. It can rewrite entire functions or propose alternative designs without the social awkwardness that might come from a human suggesting such extensive changes.
5. Focus on the code, not the coder
Perhaps most importantly, AI collaboration fundamentally changes where attention falls during review. With human reviews, it’s nearly impossible to completely separate the code from the person who wrote it. With AI, the focus stays entirely on the quality of the solution.
The cumulative effect of these dynamics is powerful: more iterations, deeper improvements, greater learning, and less emotional friction. You’ll likely find yourself refining code more thoroughly than you would with traditional processes, not because the AI gives better feedback than humans (though sometimes it might), but because the psychological barriers to continuous improvement disappear.
Tackling Complex Problems Without the Headaches
We’ve all been there. Staring at the monitor, sipping cold coffee, trying to wrap our brain around a genuinely complex problem. Maybe it’s implementing a sophisticated search algorithm, building a state machine for a multi-step process, or untangling a maze of business rules. In traditional development, these problems often become multi-day journeys of frustration, Stack Overflow searches, and the occasional existential crisis.
This is where AI collaboration truly shines. Complex problems remain complex—AI can’t magically make difficult things simple—but the way you approach them transforms dramatically.
Example: Breaking Down Complex Problem Implementation
Let’s see how this works with a real-world example: implementing a comprehensive permission system.
**You**: I need to design and implement a role-based access control system for our application. The requirements are:
1. Users can have multiple roles
2. Roles have multiple permissions
3. Permissions can be resource-specific (e.g., can edit Document A but not Document B)
4. There should be both UI permissions and API-level enforcement
5. We need to support permission inheritance (e.g., admin inherits all editor permissions)
6. The system should be efficient - not checking permissions for every UI element or API call
This feels overwhelming. How should I approach this?
**AI**: You're right that this is a complex problem, but we can break it down into manageable components. Let's start with a high-level architecture for the permission system:
**Core Components:**
1. **Data Model**
- Users, Roles, Permissions, and Resources
- Many-to-many relationships between these entities
- Inheritance hierarchies for roles
2. **Permission Evaluation Engine**
- Efficient algorithm to determine if a user has a specific permission
- Caching strategy to avoid repeated database queries
- Support for resource-specific checks
3. **API Integration Layer**
- Middleware for API route protection
- Decorator/annotation system for controllers
- Context-aware permission checks
4. **UI Permission System**
- Component-level permission rendering
- Efficient client-side permission checking
- Syncing permission state between client and server
Would you like to focus on one of these components first and work through it step by step?
**You**: Let's start with the Permission Evaluation Engine. That seems like the core that everything else will depend on.
**AI**: Good choice. The Permission Evaluation Engine is indeed the core component. Here's how we can approach it:
**Permission Evaluation Engine Design:**
1. **Core Function**: `hasPermission(userId, permissionName, resourceId = null)`
This function will check if a user has a specific permission, optionally for a specific resource.
2. **Evaluation Algorithm**:
- Get all roles for the user
- Get all permissions for those roles (including inherited permissions)
- If resourceId is provided, check resource-specific permissions
- Return true/false based on the evaluation
3. **Optimization Strategies**:
- Cache user roles to avoid repeated database queries
- Pre-calculate permission inheritance hierarchies
- Use bitwise operations for simpler permissions if possible
- Implement a permission map for O(1) lookups
Let me outline a pseudocode implementation of this engine:
[AI provides pseudocode for the permission evaluation engine]
How does this approach sound? Would you like me to focus on any particular aspect of this design?
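The evaluation algorithm outlined above can be sketched in JavaScript. Everything here is a stand-in: the in-memory tables (`userRoles`, `roleParents`, `rolePermissions`, `resourceGrants`) are hypothetical placeholders for whatever your database layer actually provides.

```javascript
// Hypothetical in-memory data -- replace with real database queries.
const userRoles = { alice: ['editor'] };
const roleParents = { admin: ['editor'], editor: ['viewer'], viewer: [] };
const rolePermissions = {
  viewer: ['document:read'],
  editor: ['document:edit'],
  admin: ['user:manage'],
};
// Resource-specific grants: permission -> userId -> set of allowed resource ids.
const resourceGrants = { 'document:edit': { alice: new Set(['doc-A']) } };

// Expand a role into itself plus every role it inherits from.
function expandRoles(role, seen = new Set()) {
  if (seen.has(role)) return seen; // guard against inheritance cycles
  seen.add(role);
  for (const parent of roleParents[role] || []) expandRoles(parent, seen);
  return seen;
}

// Core check: does userId hold permissionName, optionally on resourceId?
function hasPermission(userId, permissionName, resourceId = null) {
  const roles = new Set();
  for (const role of userRoles[userId] || []) expandRoles(role, roles);

  const permissions = new Set();
  for (const role of roles) {
    for (const p of rolePermissions[role] || []) permissions.add(p);
  }
  if (!permissions.has(permissionName)) return false;

  // Resource-specific check: require an explicit grant for that resource.
  if (resourceId !== null) {
    const grants = resourceGrants[permissionName]?.[userId];
    return grants ? grants.has(resourceId) : false;
  }
  return true;
}

console.log(hasPermission('alice', 'document:read'));          // true (inherited via viewer)
console.log(hasPermission('alice', 'document:edit', 'doc-A')); // true
console.log(hasPermission('alice', 'document:edit', 'doc-B')); // false
```

Note the deny-by-default stance on resource checks: holding a permission in general doesn’t grant it on a specific resource without an explicit entry.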
**You**: This looks good conceptually. I think the caching strategy will be crucial for performance. Can you elaborate on how we should implement caching without causing issues when permissions change?
**AI**: [Provides detailed explanation of caching strategies with invalidation approaches]
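One common invalidation approach (an assumption for this sketch, not the only option) is a versioned cache: a single counter is bumped on any role or permission change, which lazily invalidates every cached entry without scanning the cache. Here `loadPermissionsFromDb` is a hypothetical stand-in for the real queries.

```javascript
// Versioned permission cache: bumping the global counter invalidates
// every entry lazily, so the cache can never serve stale grants.
let permissionVersion = 1;
const cache = new Map(); // userId -> { version, permissions: Set }

// Hypothetical expensive lookup -- stands in for the database round trips.
function loadPermissionsFromDb(userId) {
  return new Set(userId === 'alice' ? ['document:read', 'document:edit'] : []);
}

function getCachedPermissions(userId) {
  const entry = cache.get(userId);
  if (entry && entry.version === permissionVersion) return entry.permissions;
  const permissions = loadPermissionsFromDb(userId);
  cache.set(userId, { version: permissionVersion, permissions });
  return permissions;
}

// Call this whenever roles, permissions, or assignments change.
function invalidatePermissionCache() {
  permissionVersion += 1;
}
```

A single counter trades granularity for simplicity: one role change invalidates every user’s entry, but correctness is easy to reason about. Per-user or per-role versions are a refinement once profiling justifies them.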
**You**: That makes sense. Now let's think about how this would integrate with our Express API routes. What would the middleware look like?
**AI**: [Designs API middleware that uses the permission engine]
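The shape of such middleware depends on your stack, but an Express-style sketch might look like the following. The `hasPermission` stub and the `getResourceId` extractor are hypothetical; a real implementation would delegate to the permission engine and an upstream auth layer that sets `req.user`.

```javascript
// Stub permission check -- a real app would call the permission engine.
function hasPermission(userId, permissionName, resourceId = null) {
  return userId === 'alice' && permissionName === 'document:edit';
}

// Factory producing Express-style middleware (req, res, next) that
// rejects the request unless the authenticated user holds the permission.
function requirePermission(permissionName, getResourceId = null) {
  return (req, res, next) => {
    const userId = req.user && req.user.id; // set by earlier auth middleware
    const resourceId = getResourceId ? getResourceId(req) : null;
    if (!userId || !hasPermission(userId, permissionName, resourceId)) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    next();
  };
}

// Example wiring on a route (with a real Express app):
// app.put('/documents/:id',
//         requirePermission('document:edit', req => req.params.id),
//         handler);
```

The factory pattern keeps permission names out of handler bodies, so a route’s protection is visible at the point of registration.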
This example demonstrates how breaking down a complex problem into manageable pieces makes progress possible. The AI helps structure the approach, focusing on one component at a time while maintaining sight of how everything fits together.
Throughout this process, the AI serves as both an implementation partner and a complexity manager. It helps break down the problem, keeps track of the overall architecture while you focus on specific components, suggests implementations that align with established patterns, and identifies edge cases you might have missed.
The result isn’t just a working solution—it’s a well-structured, maintainable solution that addresses the inherent complexity of the problem. And you’ve achieved this without the usual headaches of multi-day debugging sessions, confusion about how components should interact, or the sinking feeling that comes from realising your approach won’t work after days of effort.
Complex problems remain complex, but with AI collaboration, they become manageable challenges rather than intimidating obstacles.
Remember to Make Decisions Permanent
In the rapid flow of AI collaboration, it’s easy to fall into a deceptively comfortable trap: treating your conversations as the ultimate record of decisions, only to discover later that critical design choices vanish like footprints on a beach when the tide comes in.
I’ve seen this pattern repeat across teams adopting AI-augmented development. Engineers have brilliant sessions with their AI partners, making thoughtful architectural decisions, establishing naming conventions, and defining component boundaries. Later, when revisiting the code or bringing in teammates, they discover a surprising problem: nobody can remember exactly why they designed things this way or what alternatives they considered and rejected.
This amnesia happens because AI conversations create an illusion of permanence. The discussion feels so thorough, so detailed, that surely you’ll remember the reasoning later. But human memory fades, and AI memories are often limited by context windows or session boundaries. Without explicit effort to make important decisions permanent, your carefully considered architecture becomes a mysterious artefact that future-you will stare at, wondering “what was I thinking?”
Example: Documenting Architectural Decisions
**You**: We've decided to use a microservices architecture for our e-commerce platform with separate services for:
- Product catalogue
- User accounts
- Order processing
- Payment handling
- Inventory management
Services will communicate via message queues for asynchronous operations and REST APIs for synchronous needs. We'll use MongoDB for the catalogue and PostgreSQL for accounts, orders, payments and inventory.
Can you help me document this architecture decision and the reasoning behind it? I want to make sure we capture why we made these choices.
**AI**: Here's a structured Architecture Decision Record (ADR) documenting your microservices decisions:
# ADR-001: Microservices Architecture for E-Commerce Platform
## Context
Our e-commerce platform needs to support high scalability, independent development teams, and different data storage requirements across components. We need to balance development velocity with operational complexity.
## Decision
We will implement a microservices architecture with these service boundaries:
- Product catalogue (MongoDB)
- User accounts (PostgreSQL)
- Order processing (PostgreSQL)
- Payment handling (PostgreSQL)
- Inventory management (PostgreSQL)
Services will communicate via:
- Message queues for asynchronous operations
- REST APIs for synchronous operations
## Rationale
- **Separate service boundaries** allow teams to develop and deploy independently
- **MongoDB for catalogue** supports the flexible schema needed for varied product types and attributes
- **PostgreSQL for transactional services** provides ACID compliance for financial and inventory data
- **Message queues** enable resilient communication and eventual consistency where appropriate
- **REST APIs** provide simple integration for synchronous operations where immediate responses are required
- **Separation of payment handling** improves security isolation and compliance
- **Independent inventory management** allows for specialised optimisation of stock calculations
## Consequences
- **Increased operational complexity** will require investment in monitoring, deployment pipelines, and service discovery
- **Data consistency** will require careful design of cross-service transactions
- **Developer onboarding** may take longer as developers need to understand distributed system concepts
- **Performance overhead** from network calls between services must be monitored
- **Independent scaling** will allow cost optimisation for different traffic patterns
## Alternatives Considered
- **Monolithic architecture**: Rejected due to scaling limitations and difficulty supporting multiple teams
- **BFF pattern with monolith**: Considered but insufficient for our team structure and deployment needs
- **Serverless architecture**: Evaluated but deemed premature given our operations team's current expertise
This ADR should be reviewed after 6 months to validate if the benefits are being realised and the consequences properly managed.
**You**: This is excellent. I'd like to add one more section about data ownership - we should document which service is the source of truth for different data entities.
**AI**: Great point! Here's an additional section for your ADR:
## Data Ownership
Each service will be the authoritative source of truth for specific data domains:
- **Product Catalogue Service**:
- Product information
- Categories
- Product media/assets
- Pricing
- **User Account Service**:
- Customer profiles
- Authentication credentials
- Preferences
- Address book
- **Order Processing Service**:
- Orders
- Order history
- Shipping details
- Order status
- **Payment Handling Service**:
- Payment methods
- Transaction records
- Refund history
- Payment status
- **Inventory Management Service**:
- Stock levels
- Warehouse information
- Supplier data
- Restock thresholds
Services requiring data outside their domain will either:
1. Maintain a read-only replica with eventual consistency
2. Query the authoritative service via API when fresh data is critical
3. Subscribe to change events for critical cross-domain updates
This clear delineation of data ownership will reduce ambiguity, prevent data duplication issues, and establish clear boundaries of responsibility.
By creating this formal architecture decision record, you’ve transformed temporary conversation into permanent documentation that explains not just what decisions were made, but why they were made and what alternatives were considered. This documentation serves both your future self and anyone else who works on the system.
Here are concrete strategies to make critical decisions permanent:
1. **Document decisions within the code itself.** The most reliable place to record architectural decisions is within the code they affect. Comments at the module or class level can explain not just what a component does, but why it was designed that way.
2. **Create dedicated architecture decision records (ADRs).** For significant architectural decisions, create formal decision records in your repository. These documents capture the context, options considered, decision reached, and consequences in a structured format.
3. **Integrate decisions into PR descriptions and commit messages.** Pull requests and meaningful commit messages provide another way to document significant decisions. When creating a PR for a new component, include a summary of key design decisions and alternatives considered.
4. **Convert key decisions into test cases.** Tests don’t just verify functionality—they can also document design decisions by making expectations explicit. When you make a significant design choice, consider capturing it in a test that would fail if that decision were reversed.
5. **Schedule architecture review sessions.** Even with written documentation, some context inevitably gets lost. Schedule periodic architecture review sessions where team members present key components and explain the reasoning behind their design. Record these sessions or take detailed notes to preserve the discussions.
6. **Teach your AI about your architecture.** As you make and document architectural decisions, teach them to your AI partner. Start new sessions by summarising key architectural principles and previous decisions. This creates valuable context for the current conversation and helps ensure that new code aligns with established patterns.
The practices described above may seem like additional work in the moment—after all, you and your AI partner both understand the decisions during your conversation. But this small investment in permanence pays enormous dividends when you or others need to understand, maintain, or extend the code in the future.
Remember: code outlives conversations. Make your important decisions permanent.
Conclusion
Throughout this chapter, we’ve explored how AI transforms the coding experience from a solitary typing exercise to a directed collaboration. You’ve seen how to guide your AI partner effectively, providing context and direction while maintaining control over architectural decisions and implementation direction. The days of writing every line of code yourself are giving way to a more strategic role where your judgement, clarity of thought, and ability to communicate your intentions become your most valuable skills.
This shift doesn’t diminish your importance as a developer—it elevates it. You’re no longer constrained by typing speed or syntax recall; you’re free to focus on the elements of development that truly require human insight: understanding user needs, making architectural decisions, identifying edge cases, and ensuring the solution truly solves the right problem. The tedious implementation details that once consumed so much of your time can now be delegated, allowing you to operate at a higher level of abstraction.
Perhaps most importantly, this partnership transforms how you approach complex problems. Rather than being intimidated by large, intricate challenges, you can break them down into manageable pieces, explore multiple approaches quickly, and refine solutions through rapid iteration. The psychological barriers that often accompany difficult coding tasks—the blank-page syndrome, the fear of getting started, the frustration of debugging—all diminish when you have a tireless partner ready to help implement your ideas.
Remember that effective collaboration requires thoughtful communication. Be explicit about your requirements, provide sufficient context, break problems into manageable pieces, and document important decisions. Your AI partner is remarkably capable but ultimately follows your lead—the clearer your direction, the better the results.
MAINTAINING CONTEXT
When moving from Coding to Frontend Development, bring forward:
• Your Ideation Summary for user experience goals
• Your Requirements Summary for functional expectations
• Your Test Cases for behaviour verification
• The core functionality implemented in your backend
• API contracts and data structures
• Key architectural decisions that affect the frontend
TIP: Your context chain now includes four linked summaries. Each builds upon the previous layers, creating a rich tapestry of understanding that ensures frontend decisions align with original intentions and technical implementations.
TL;DR
AI transforms coding from a typing exercise to a guidance and decision-making process. You function as a tech lead—directing implementation, making architectural decisions, and focusing on higher-level concerns—while the AI handles implementation details. This partnership works best when you provide clear context, iterate with feedback, break problems into manageable pieces, and document important decisions. The result isn’t just faster coding but a fundamentally different approach to software development that emphasises your judgement and strategic thinking over mechanical implementation skills.