Use this skill when
- Working on TDD workflow or TDD cycle tasks
- Needing guidance, best practices, or checklists for the TDD cycle

Do not use this skill when

- The task is unrelated to the TDD workflow or TDD cycle
- You need a different domain or tool outside this scope

Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open resources/implementation-playbook.md.

Execute a comprehensive Test-Driven Development (TDD) workflow with strict red-green-refactor discipline:
[Extended thinking: This workflow enforces test-first development through coordinated agent orchestration. Each phase of the TDD cycle is strictly enforced with fail-first verification, incremental implementation, and continuous refactoring. The workflow supports both single test and test suite approaches with configurable coverage thresholds.]
Configuration
Coverage Thresholds
- Minimum line coverage: 80%
- Minimum branch coverage: 75%
- Critical path coverage: 100%

Refactoring Triggers

- Cyclomatic complexity > 10
- Method length > 20 lines
- Class length > 200 lines
- Duplicate code blocks > 3 lines
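These numbers can be captured as a machine-checkable configuration. The sketch below is one possible Python representation; the dictionary keys and the `check_thresholds` helper are illustrative assumptions, not part of the workflow's required tooling.

```python
"""Hypothetical gate that mirrors the thresholds and triggers above."""

COVERAGE_THRESHOLDS = {
    "line": 80.0,           # minimum line coverage, percent
    "branch": 75.0,         # minimum branch coverage, percent
    "critical_path": 100.0, # critical paths must be fully covered
}

REFACTORING_TRIGGERS = {
    "cyclomatic_complexity": 10,  # refactor when a function exceeds this
    "method_length_lines": 20,
    "class_length_lines": 200,
    "duplicate_block_lines": 3,
}


def check_thresholds(measured: dict[str, float]) -> list[str]:
    """Return human-readable violations; an empty list means the gate passes."""
    return [
        f"{name} coverage {measured.get(name, 0.0):.1f}% is below the {minimum:.0f}% minimum"
        for name, minimum in COVERAGE_THRESHOLDS.items()
        if measured.get(name, 0.0) < minimum
    ]


if __name__ == "__main__":
    # Example: values as they might be reported by a coverage tool.
    print(check_thresholds({"line": 83.4, "branch": 71.0, "critical_path": 100.0}))
```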
Phase 1: Test Specification and Design

1. Requirements Analysis

- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Analyze requirements for: $ARGUMENTS. Define acceptance criteria, identify edge cases, and create test scenarios. Output a comprehensive test specification."
- Output: Test specification, acceptance criteria, edge case matrix
- Validation: Ensure all requirements have corresponding test scenarios

2. Test Architecture Design

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Design test architecture for: $ARGUMENTS based on test specification. Define test structure, fixtures, mocks, and test data strategy. Ensure testability and maintainability."
- Output: Test architecture, fixture design, mock strategy
- Validation: Architecture supports isolated, fast, reliable tests
Phase 2: RED - Write Failing Tests

3. Write Unit Tests (Failing)

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING unit tests for: $ARGUMENTS. Tests must fail initially. Include edge cases, error scenarios, and happy paths. DO NOT implement production code."
- Output: Failing unit tests, test documentation
- CRITICAL: Verify all tests fail with expected error messages
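For instance, in a Python project the RED step might start from a test like the sketch below. The `text_utils.slugify` target and its expected behavior are purely illustrative; at this point the production module does not exist, so the test run fails at import time, which is the "right reason" the next step verifies.

```python
"""RED: a failing-first unit test, written before any production code exists.

`text_utils.slugify` is a hypothetical target used for illustration; the import
fails until the module is created in the GREEN step.
"""
import pytest

from text_utils import slugify  # does not exist yet -> the test run fails here


def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


def test_slugify_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```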
Use Task tool with subagent_type="tdd-workflows::code-reviewer"Prompt: "Verify that all tests for: $ARGUMENTS are failing correctly. Ensure failures are for the right reasons (missing implementation, not test errors). Confirm no false positives."Output: Test failure verification reportGATE: Do not proceed until all tests fail appropriatelyPhase 3: GREEN - Make Tests Pass
Phase 3: GREEN - Make Tests Pass

5. Minimal Implementation

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement MINIMAL code to make tests pass for: $ARGUMENTS. Focus only on making tests green. Do not add extra features or optimizations. Keep it simple."
- Output: Minimal working implementation
- Constraint: No code beyond what's needed to pass tests
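Continuing the illustrative `slugify` example from the RED phase, a minimal GREEN step adds just enough code to satisfy those two tests and nothing more:

```python
"""GREEN: the smallest text_utils.py that makes the failing tests pass.

Deliberately minimal: no extra normalization, options, or optimizations beyond
what the current tests require.
"""


def slugify(text: str) -> str:
    if not text:
        raise ValueError("text must not be empty")
    return text.strip().lower().replace(" ", "-")
```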
6. Verify Test Success

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Run all tests for: $ARGUMENTS and verify they pass. Check test coverage metrics. Ensure no tests were accidentally broken."
- Output: Test execution report, coverage metrics
- GATE: All tests must pass before proceeding
Phase 4: REFACTOR - Improve Code Quality

7. Code Refactoring

- Use Task tool with subagent_type="tdd-workflows::code-reviewer"
- Prompt: "Refactor implementation for: $ARGUMENTS while keeping tests green. Apply SOLID principles, remove duplication, improve naming, and optimize performance. Run tests after each refactoring."
- Output: Refactored code, refactoring report
- Constraint: Tests must remain green throughout

8. Test Refactoring

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Refactor tests for: $ARGUMENTS. Remove test duplication, improve test names, extract common fixtures, and enhance test readability. Ensure tests still provide same coverage."
- Output: Refactored tests, improved test structure
- Validation: Coverage metrics unchanged or improved
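As a sketch of what test refactoring can look like, duplicated happy-path assertions in the illustrative `slugify` tests could be collapsed with `pytest.mark.parametrize`, keeping the same coverage while improving readability:

```python
"""REFACTOR (tests): same coverage as before, expressed with less duplication."""
import pytest

from text_utils import slugify  # illustrative module from the earlier sketches


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Already-Slugged  ", "already-slugged"),
        ("MiXeD Case Words", "mixed-case-words"),
    ],
)
def test_slugify_happy_paths(raw, expected):
    assert slugify(raw) == expected


def test_slugify_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```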
Phase 5: Integration and System Tests

9. Write Integration Tests (Failing First)

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING integration tests for: $ARGUMENTS. Test component interactions, API contracts, and data flow. Tests must fail initially."
- Output: Failing integration tests
- Validation: Tests fail due to missing integration logic
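A failing-first integration test follows the same pattern as the unit tests, one level up. The sketch below assumes a hypothetical `article_service` module that wires `slugify` into a storage component; the names are illustrative, and the test fails until step 10 implements the integration.

```python
"""RED (integration): exercises component interaction, not a single function.

`article_service.ArticleService` and `InMemoryArticleRepository` are
hypothetical names; the import fails until step 10 implements them.
"""
from article_service import ArticleService, InMemoryArticleRepository


def test_publishing_an_article_stores_it_under_its_slug():
    repository = InMemoryArticleRepository()
    service = ArticleService(repository=repository)

    service.publish(title="Hello World", body="First post")

    stored = repository.get("hello-world")
    assert stored is not None
    assert stored.body == "First post"
```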
10. Implement Integration

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement integration code for: $ARGUMENTS to make integration tests pass. Focus on component interaction and data flow."
- Output: Integration implementation
- Validation: All integration tests pass
Phase 6: Continuous Improvement Cycle

11. Performance and Edge Case Tests

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Add performance tests and additional edge case tests for: $ARGUMENTS. Include stress tests, boundary tests, and error recovery tests."
- Output: Extended test suite
- Metric: Increased test coverage and scenario coverage

12. Final Code Review

- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Perform comprehensive review of: $ARGUMENTS. Verify TDD process was followed, check code quality, test quality, and coverage. Suggest improvements."
- Output: Review report, improvement suggestions
- Action: Implement critical suggestions while maintaining green tests

Incremental Development Mode
For test-by-test development:
1. Write ONE failing test
2. Make ONLY that test pass
3. Refactor if needed
4. Repeat for the next test

Use this approach by adding the --incremental flag to focus on one test at a time.
Test Suite Mode
For comprehensive test suite development:
1. Write ALL tests for a feature/module (failing)
2. Implement code to pass ALL tests
3. Refactor the entire module
4. Add integration tests

Use this approach by adding the --suite flag for batch test development.
Validation Checkpoints
RED Phase Validation
- [ ] All tests written before implementation
- [ ] All tests fail with meaningful error messages
- [ ] Test failures are due to missing implementation
- [ ] No test passes accidentally

GREEN Phase Validation

- [ ] All tests pass
- [ ] No extra code beyond test requirements
- [ ] Coverage meets minimum thresholds
- [ ] No test was modified to make it pass

REFACTOR Phase Validation

- [ ] All tests still pass after refactoring
- [ ] Code complexity reduced
- [ ] Duplication eliminated
- [ ] Performance improved or maintained
- [ ] Test readability improved

Coverage Reports
Generate coverage reports after each phase:
- Line coverage
- Branch coverage
- Function coverage
- Statement coverage
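One way to produce these reports after each phase is the coverage.py API. The sketch below assumes the tests were run under coverage with branch tracking enabled (for example `coverage run --branch -m pytest`), and the output paths are illustrative.

```python
"""Report-generation sketch: assumes a .coverage data file already exists."""
import coverage

cov = coverage.Coverage()
cov.load()

# Terminal summary (per-file line/statement coverage plus totals).
total_percent = cov.report(show_missing=True)

# HTML and XML reports for browsing and CI tooling; paths are illustrative.
cov.html_report(directory="coverage_html")
cov.xml_report(outfile="coverage.xml")

print(f"Total coverage: {total_percent:.1f}%")
```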
Failure Recovery

If TDD discipline is broken:

1. STOP immediately
2. Identify which phase was violated
3. Roll back to the last valid state
4. Resume from the correct phase
5. Document the lesson learned

TDD Metrics Tracking
Track and report:
- Time in each phase (Red/Green/Refactor)
- Number of test-implementation cycles
- Coverage progression
- Refactoring frequency
- Defect escape rate
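A lightweight way to capture these metrics during a session is one record per red-green-refactor cycle. The sketch below is illustrative only and not tied to any particular tool; the field names are assumptions.

```python
"""Illustrative per-cycle metrics record for TDD tracking."""
from dataclasses import dataclass, field


@dataclass
class TddCycleMetrics:
    cycle_number: int
    red_seconds: float = 0.0       # time spent writing/verifying failing tests
    green_seconds: float = 0.0     # time spent reaching a passing state
    refactor_seconds: float = 0.0  # time spent refactoring with tests green
    coverage_percent: float = 0.0  # coverage at the end of the cycle
    refactorings: int = 0          # refactoring passes performed
    escaped_defects: int = 0       # defects found later in code covered by tests


@dataclass
class TddSessionMetrics:
    cycles: list[TddCycleMetrics] = field(default_factory=list)

    def coverage_progression(self) -> list[float]:
        """Coverage at the end of each cycle, in order."""
        return [cycle.coverage_percent for cycle in self.cycles]
```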
Anti-Patterns to Avoid

- Writing implementation before tests
- Writing tests that already pass
- Skipping the refactor phase
- Writing multiple features without tests
- Modifying tests to make them pass
- Ignoring failing tests
- Writing tests after implementation

Success Criteria
- 100% of code written test-first
- All tests pass continuously
- Coverage exceeds thresholds
- Code complexity within limits
- Zero defects in covered code
- Clear test documentation
- Fast test execution (< 5 seconds for unit tests)

Notes
- Enforce strict RED-GREEN-REFACTOR discipline
- Each phase must be completed before moving to the next
- Tests are the specification
- If a test is hard to write, the design needs improvement
- Refactoring is NOT optional
- Keep test execution fast
- Tests should be independent and isolated

TDD implementation for: $ARGUMENTS