---
name: dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---
# Dispatching Parallel Agents

## Overview
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
## When to Use
```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
**Use when:**

- You have 2+ failures in unrelated areas (different test files, subsystems, or bugs)
- Each problem can be investigated without touching the others
- Agents won't compete for shared state (same files, same resources)

**Don't use when:**

- Failures look related - fixing one might fix the others
- You're still exploring and don't know what's broken yet
- Agents would need to edit the same files or share resources
## The Pattern

### 1. Identify Independent Domains
Group failures by what's broken. In the session example below, the failures fall into three domains:

- `agent-tool-abort.test.ts` - abort handling
- `batch-completion-behavior.test.ts` - batch completion logic
- `tool-approval-race-conditions.test.ts` - approval race conditions
Each domain is independent - fixing tool approval doesn't affect abort tests.
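
If you're scripting this step, the grouping is easy to mechanize. A minimal TypeScript sketch, assuming a hypothetical `TestFailure` shape parsed from the test runner's output (not an API from the source):

```typescript
// Hypothetical shape for one parsed test failure (an assumption, not from the source).
interface TestFailure {
  file: string;      // e.g. "src/agents/agent-tool-abort.test.ts"
  testName: string;
  message: string;
}

// Group failures by file: each file becomes one candidate domain.
// Files are a decent first-pass proxy for "independent problem domain";
// merge groups manually if two files clearly share a root cause.
function groupByDomain(failures: TestFailure[]): Map<string, TestFailure[]> {
  const domains = new Map<string, TestFailure[]>();
  for (const failure of failures) {
    const group = domains.get(failure.file) ?? [];
    group.push(failure);
    domains.set(failure.file, group);
  }
  return domains;
}
```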
### 2. Create Focused Agent Tasks
Each agent gets:

- A specific scope (one file or one subsystem)
- The exact failing tests and error messages
- Constraints on what it may and may not change
- A required output: summary of root cause and changes
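
A sketch of building such a task programmatically; the structure mirrors the example prompt shown under "Agent Prompt Structure" below, and all names here are illustrative:

```typescript
// Build a focused prompt for one domain. The failure strings are assumed
// to be pre-formatted as '"test name" - error detail' lines.
function buildAgentPrompt(file: string, failures: string[]): string {
  return [
    `Fix the ${failures.length} failing tests in ${file}:`,
    failures.map((f) => `- ${f}`).join("\n"),
    "Your task:\n" +
      "1. Read the test file and understand what each test verifies\n" +
      "2. Identify root cause - timing issue or actual bug?\n" +
      "3. Fix the real issue - do NOT just increase timeouts",
    "Return: summary of what you found and what you fixed.",
  ].join("\n\n");
}
```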
### 3. Dispatch in Parallel
```javascript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```

### 4. Review and Integrate
When agents return:

- Read each agent's summary of root cause and changes
- Check that the fixes don't conflict (no overlapping edits)
- Run the full test suite to confirm everything is green
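
A sketch of the full dispatch-and-collect flow, assuming a hypothetical async `dispatchAgent(prompt)` that wraps the environment's Task tool and resolves to the agent's summary (both the wrapper and its signature are assumptions):

```typescript
// Hypothetical wrapper around the environment's Task tool.
declare function dispatchAgent(prompt: string): Promise<string>;

async function fixInParallel(prompts: string[]): Promise<void> {
  // allSettled instead of all: one failed agent shouldn't discard the
  // summaries the other agents produced.
  const results = await Promise.allSettled(prompts.map(dispatchAgent));

  for (const [i, result] of results.entries()) {
    if (result.status === "fulfilled") {
      console.log(`Agent ${i + 1} summary:\n${result.value}`);
    } else {
      console.error(`Agent ${i + 1} failed:`, result.reason);
    }
  }
}
```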
## Agent Prompt Structure
Good agent prompts are:

- **Specific** - one file or one problem domain
- **Contextual** - exact test names and error messages included
- **Constrained** - explicit about what not to change
- **Explicit about output** - what the agent should report back

For example:
```
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

- "should abort tool with partial output capture" - expects 'interrupted at' in message
- "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
- "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues. Your task:

1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.
```
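
To make "event-based waiting" concrete: instead of sleeping a fixed duration and hoping the abort has finished, await the specific event that signals completion. A minimal Node.js sketch; the `"aborted"` event name and the emitter are illustrative, not from the source:

```typescript
import { once, EventEmitter } from "node:events";

// Flaky: assumes the abort always completes within 500 ms.
async function waitArbitrary(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 500));
}

// Deterministic: resolves exactly when the tool reports it was aborted,
// however long (or short) that takes.
async function waitForAbort(tool: EventEmitter): Promise<void> {
  await once(tool, "aborted");
}
```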
## Common Mistakes
- ❌ **Too broad:** "Fix all the tests" - agent gets lost
- ✅ **Specific:** "Fix agent-tool-abort.test.ts" - focused scope
- ❌ **No context:** "Fix the race condition" - agent doesn't know where
- ✅ **Context:** Paste the error messages and test names
- ❌ **No constraints:** Agent might refactor everything
- ✅ **Constraints:** "Do NOT change production code" or "Fix tests only"
- ❌ **Vague output:** "Fix it" - you don't know what changed
- ✅ **Specific output:** "Return summary of root cause and changes"
## When NOT to Use
- **Related failures:** Fixing one might fix others - investigate together first
- **Need full context:** Understanding requires seeing the entire system
- **Exploratory debugging:** You don't know what's broken yet
- **Shared state:** Agents would interfere (editing same files, using same resources)
## Real Example from Session
**Scenario:** 6 test failures across 3 files after major refactoring

**Failures:** spread across `agent-tool-abort.test.ts`, `batch-completion-behavior.test.ts`, and `tool-approval-race-conditions.test.ts`

**Decision:** Independent domains - abort logic is separate from batch completion, which is separate from race conditions
**Dispatch:**

```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```

**Results:** Each agent returned a summary of root cause and changes for its file

**Integration:** All fixes independent, no conflicts, full suite green

**Time saved:** 3 problems solved in parallel instead of sequentially
## Key Benefits

- **Speed:** independent problems are solved concurrently, not one after another
- **Focus:** each agent works in a narrow, well-defined scope
- **Isolation:** agents can't contaminate each other's investigation with unrelated context
## Verification

After agents return:

- Run the full test suite - all green, not just the targeted tests
- Confirm no two agents edited the same files
- Review each summary to make sure root causes were found, not papered over
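
One way to mechanize the no-conflicts check, assuming each agent reports the list of files it changed (a sketch under that assumption, not a prescribed API):

```typescript
// Return every file touched by more than one agent.
function findConflicts(changedFilesPerAgent: string[][]): string[] {
  const touchCount = new Map<string, number>();
  for (const files of changedFilesPerAgent) {
    // De-duplicate within one agent so it isn't counted twice.
    for (const file of new Set(files)) {
      touchCount.set(file, (touchCount.get(file) ?? 0) + 1);
    }
  }
  return [...touchCount.entries()]
    .filter(([, count]) => count > 1)
    .map(([file]) => file);
}
```

An empty result means the fixes can be integrated as-is; anything else needs a manual merge review.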
## Real-World Impact

From a debugging session (2025-10-03): 6 failures across 3 files were fixed by 3 agents working in parallel, with no conflicting edits and a green suite at the end.