The planning-execution split: why your AI coding workflow is probably wrong
Most developers are using AI assistants inefficiently. Here's how separating planning from execution can 10x your productivity.
If you're like most developers, you're probably using AI coding assistants wrong. Not completely wrong, but inefficiently wrong. You're asking your AI to be both architect and construction worker simultaneously, and that's exactly why your outputs feel generic and your productivity gains plateau at 2x instead of 10x.
The problem with "just build it"
The typical workflow goes something like this:
- Developer: "Build me a REST API for user authentication"
- AI: generates 500 lines of boilerplate
- Developer: copies, pastes, debugs for 2 hours
- Result: Working code that's... fine. Just fine.
The issue? You skipped the most valuable step: planning.
The two-phase approach
Phase 1: Planning (architect mode)
Before writing a single line of code, engage your AI as a staff-level architect:
- "What are the trade-offs between JWT and session-based auth for a React + Node.js app with 100k monthly active users?"
- "Design a database schema that handles soft deletes, audit logs, and GDPR compliance"
- "What's the CAP theorem impact if we use Redis for session storage vs PostgreSQL?"
This phase should produce:
- System design diagrams
- Decision matrices (why X over Y)
- Performance considerations
- Security attack vectors
No code yet. Just strategy.
Phase 2: Execution (builder mode)
Now that you have a battle-tested plan, the AI can execute with precision:
- "Implement the JWT authentication flow we discussed, using the bcrypt hashing strategy from the planning phase"
- "Generate the User model with the schema we designed, including the soft delete timestamp and audit fields"
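As a sketch of what that second prompt might yield, here is one shape a soft-deletable, auditable User model could take. This is illustrative, not from the post: the field names (`deleted_at`, `created_at`, `updated_at`) and the dataclass form are my assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


def _now() -> datetime:
    return datetime.now(timezone.utc)


@dataclass
class User:
    # Identity and credentials (hash, never the raw password)
    id: int
    email: str
    password_hash: str
    # Soft delete: the row stays for referential integrity;
    # queries filter on deleted_at IS NULL
    deleted_at: Optional[datetime] = None
    # Audit fields: when the record was created/changed, for
    # GDPR access and erasure requests
    created_at: datetime = field(default_factory=_now)
    updated_at: datetime = field(default_factory=_now)

    @property
    def is_deleted(self) -> bool:
        return self.deleted_at is not None

    def soft_delete(self) -> None:
        # Mark instead of DELETE so audit history survives
        self.deleted_at = _now()
        self.updated_at = self.deleted_at
```

The point of deciding this in the planning phase is that "soft delete" becomes a concrete column and a concrete query filter, not a vague requirement the AI reinvents per endpoint.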
The difference? The AI isn't guessing. It's following a blueprint you validated.
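For the JWT prompt, a minimal sketch of the token mechanics using only the standard library. A real implementation would use a maintained library such as PyJWT, and bcrypt for password hashing as the post suggests; the secret and TTL here are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # placeholder; load from configuration in practice


def _b64(data: bytes) -> str:
    # JWTs use unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(user_id: int, ttl_s: int = 3600) -> str:
    # HS256-style JWT: header.payload.signature
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_s}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str) -> bool:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    # constant-time comparison to resist timing attacks
    return hmac.compare_digest(sig, expected)
```

With the blueprint fixed, the AI's job shrinks to filling in mechanics like these rather than re-deciding the auth strategy mid-generation.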
Real-world example: refactoring a monolith
I recently used this approach to decompose a 50k-line monolith into microservices:
Planning phase (2 hours with AI):
- Identified 7 bounded contexts
- Mapped inter-service dependencies
- Designed event-driven communication patterns
- Calculated blast radius for each service failure
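The event-driven communication pattern from that planning phase can be sketched with a minimal in-process bus, a stand-in for a real broker such as Kafka or RabbitMQ. Topic and handler names are illustrative assumptions, not details from the actual migration.

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List


class EventBus:
    """Minimal in-process stand-in for a message broker."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], Any]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], Any]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Each service reacts independently; the publisher knows
        # nothing about its consumers, which limits blast radius
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
received = []
# Two bounded contexts reacting to the same domain event
bus.subscribe("user.created", lambda e: received.append(("billing", e["id"])))
bus.subscribe("user.created", lambda e: received.append(("email", e["id"])))
bus.publish("user.created", {"id": 7})
```

The decoupling is the point: if the email service fails, the billing service never notices, which is exactly the blast-radius property the planning phase was sizing.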
Execution phase (1 week with AI):
- Generated service boilerplates
- Implemented event handlers
- Wrote integration tests
- Migrated data incrementally
Result: What would've taken 3 months solo took 1.5 weeks. And the architecture was cleaner because I forced myself to think before building.
Why this works
- Cognitive load separation: Planning requires high-level reasoning. Execution requires precision. Humans (and AIs) struggle to do both simultaneously.
- Iteration is cheaper pre-code: Changing a Mermaid diagram takes 30 seconds. Refactoring 2,000 lines of code takes 3 days.
- AI strengths align: Large language models excel at generating alternatives and exploring trade-offs (planning). They also excel at boilerplate and pattern replication (execution). Mixing the two dilutes both.
The tactical shift
Next time you open your AI coding assistant:
Instead of: "Build a dashboard with charts"
Try: "I need a dashboard for 50 concurrent users showing real-time metrics. Trade-offs: WebSockets vs SSE vs polling. Client: React. Backend: FastAPI. What's the latency/complexity matrix?"
Then: "Implement the SSE approach we decided on, with the reconnection logic and error boundaries we discussed"
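A minimal sketch of two pieces that execution prompt refers to: the SSE wire format for one event, and jittered exponential backoff for the client's reconnection logic. Function names and defaults are my assumptions; a production FastAPI backend would stream these frames from an async generator.

```python
import random
from typing import List, Optional


def sse_event(data: str, event: Optional[str] = None,
              event_id: Optional[str] = None) -> str:
    """Format one message per the Server-Sent Events wire format."""
    lines: List[str] = []
    if event_id is not None:
        lines.append(f"id: {event_id}")  # lets clients resume via Last-Event-ID
    if event is not None:
        lines.append(f"event: {event}")
    # Multi-line payloads become repeated data: fields
    lines.extend(f"data: {line}" for line in data.splitlines())
    return "\n".join(lines) + "\n\n"  # blank line terminates the event


def reconnect_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Jittered exponential backoff for client reconnection."""
    # Jitter spreads reconnect storms out instead of synchronizing them
    return min(cap, base * 2 ** attempt) * (0.5 + random.random() / 2)
```

Note how much of this only makes sense because the planning phase already ruled out WebSockets and polling: the `id:` field and backoff exist precisely because SSE reconnection was a decision, not an afterthought.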
The principal engineer mindset
This isn't just an AI trick. It's how senior engineers think. You don't start with npm install. You start with "What problem am I actually solving, and what's the cheapest way to validate my assumptions?"
AI just makes the planning phase conversational instead of solitary.
Try it tomorrow
Take one task from your backlog. Spend 20 minutes in planning mode with your AI. No code. Just decisions. Document the output.
Then execute.
You'll notice two things:
- The AI's output is dramatically better
- You learned something (because you were forced to understand the trade-offs, not just copy-paste)
That's the difference between using AI as a search engine and using it as a force multiplier.
What's your AI coding workflow? Hit me up if you've found patterns that work (or hilariously fail).