## Use this skill when

- Working on comprehensive full-review tasks or workflows
- Needing guidance, best practices, or checklists for a comprehensive full review

## Do not use this skill when

- The task is unrelated to comprehensive full review
- You need a different domain or tool outside this scope

## Instructions

1. Clarify goals, constraints, and required inputs.
2. Apply relevant best practices and validate outcomes.
3. Provide actionable steps and verification.
4. If detailed examples are required, open resources/implementation-playbook.md.

Orchestrate a comprehensive, multi-dimensional code review using specialized review agents.
[Extended thinking: This workflow performs an exhaustive code review by orchestrating multiple specialized agents in sequential phases. Each phase builds upon previous findings to create a comprehensive review that covers code quality, security, performance, testing, documentation, and best practices. The workflow integrates modern AI-assisted review tools, static analysis, security scanning, and automated quality metrics. Results are consolidated into actionable feedback with clear prioritization and remediation guidance. The phased approach ensures thorough coverage while maintaining efficiency through parallel agent execution where appropriate.]
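The phased, context-threading flow described above can be sketched as a plain-Python pipeline. This is an illustrative sketch only: the agent calls are stubbed as functions (in the real workflow each would be a Task tool invocation), and the phase and agent names are taken from the sections below.

```python
# Sketch of the phased review pipeline: each phase receives the context
# accumulated by earlier phases, so later agents can build on prior findings.
# Agent invocations are stubbed as plain callables for illustration.

def run_phase(name, agents, context):
    """Run every agent in a phase and merge its findings into the context."""
    findings = {}
    for agent_name, agent in agents.items():
        findings[agent_name] = agent(context)  # real version: Task tool call
    context[name] = findings
    return context

def review_pipeline(phases, target):
    """Execute phases in order, threading context from one to the next."""
    context = {"target": target}
    for name, agents in phases:
        context = run_phase(name, agents, context)
    return context

# Stub agents standing in for the specialized review agents.
phases = [
    ("phase1", {"code-reviewer": lambda ctx: ["high complexity in auth.py"],
                "architect-review": lambda ctx: ["circular dependency"]}),
    ("phase2", {"security-auditor": lambda ctx: ["SQL injection risk"]}),
]
result = review_pipeline(phases, "my-service")
```

The point of the design is that `context` is append-only: Phase 2 agents can read `context["phase1"]` but never lose the original target or earlier findings.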
## Review Configuration Options
- `--security-focus`: Prioritize security vulnerabilities and OWASP compliance
- `--performance-critical`: Emphasize performance bottlenecks and scalability issues
- `--tdd-review`: Include TDD compliance and test-first verification
- `--ai-assisted`: Enable AI-powered review tools (Copilot, Codium, Bito)
- `--strict-mode`: Fail the review on any critical issues found
- `--metrics-report`: Generate a detailed quality metrics dashboard
- `--framework [name]`: Apply framework-specific best practices (React, Spring, Django, etc.)

## Phase 1: Code Quality & Architecture Review
Use Task tool to orchestrate quality and architecture agents in parallel:
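Since the two Phase 1 agents have no dependency on each other, the fan-out can be sketched with standard-library concurrency. The review functions below are hypothetical stand-ins for the Task tool calls described in 1A and 1B.

```python
# Sketch: Phase 1 agents are independent, so they can run concurrently.
from concurrent.futures import ThreadPoolExecutor

def code_quality_review(target):
    """Stand-in for the code-reviewer agent (1A)."""
    return {"agent": "code-reviewer", "findings": ["duplicate logic"]}

def architecture_review(target):
    """Stand-in for the architect-review agent (1B)."""
    return {"agent": "architect-review", "findings": ["tight coupling"]}

def run_parallel(target, agents):
    """Fan out the independent reviews and collect all results."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, target) for agent in agents]
        return [f.result() for f in futures]

results = run_parallel("my-service", [code_quality_review, architecture_review])
```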
### 1A. Code Quality Analysis
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Perform comprehensive code quality review for: $ARGUMENTS. Analyze code complexity, maintainability index, technical debt, code duplication, naming conventions, and adherence to Clean Code principles. Integrate with SonarQube, CodeQL, and Semgrep for static analysis. Check for code smells, anti-patterns, and violations of SOLID principles. Generate cyclomatic complexity metrics and identify refactoring opportunities."
- Expected output: Quality metrics, code smell inventory, refactoring recommendations
- Context: Initial codebase analysis, no dependencies on other phases

### 1B. Architecture & Design Review
- Use Task tool with subagent_type="architect-review"
- Prompt: "Review architectural design patterns and structural integrity in: $ARGUMENTS. Evaluate microservices boundaries, API design, database schema, dependency management, and adherence to Domain-Driven Design principles. Check for circular dependencies, inappropriate coupling, missing abstractions, and architectural drift. Verify compliance with enterprise architecture standards and cloud-native patterns."
- Expected output: Architecture assessment, design pattern analysis, structural recommendations
- Context: Runs in parallel with the code quality analysis

## Phase 2: Security & Performance Review
Use Task tool with security and performance agents, incorporating Phase 1 findings:
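The Phase 2 prompts below embed Phase 1 findings through named placeholders such as `{phase1_architecture_context}`. One minimal way to fill them is plain `str.format`; the template text here is abbreviated from the 2A prompt, and the example finding is invented for illustration.

```python
# Sketch: fill a Phase 2 prompt template with accumulated Phase 1 context.

SECURITY_PROMPT = (
    "Execute comprehensive security audit on: {target}. "
    "Include findings from Phase 1 architecture review: "
    "{phase1_architecture_context}."
)

def build_prompt(template, **context):
    """Interpolate accumulated phase context into a prompt template."""
    return template.format(**context)

prompt = build_prompt(
    SECURITY_PROMPT,
    target="my-service",
    phase1_architecture_context="circular dependency between auth and billing",
)
```

Keeping the placeholders as explicit keyword arguments means a missing piece of context fails loudly (`KeyError`) instead of silently producing a prompt with a hole in it.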
### 2A. Security Vulnerability Assessment
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Execute comprehensive security audit on: $ARGUMENTS. Perform OWASP Top 10 analysis, dependency vulnerability scanning with Snyk/Trivy, secrets detection with GitLeaks, input validation review, authentication/authorization assessment, and cryptographic implementation review. Include findings from Phase 1 architecture review: {phase1_architecture_context}. Check for SQL injection, XSS, CSRF, insecure deserialization, and configuration security issues."
- Expected output: Vulnerability report, CVE list, security risk matrix, remediation steps
- Context: Incorporates architectural vulnerabilities identified in Phase 1B

### 2B. Performance & Scalability Analysis
- Use Task tool with subagent_type="application-performance::performance-engineer"
- Prompt: "Conduct performance analysis and scalability assessment for: $ARGUMENTS. Profile code for CPU/memory hotspots, analyze database query performance, review caching strategies, identify N+1 query problems, assess connection pooling, and evaluate asynchronous processing patterns. Consider architectural findings from Phase 1: {phase1_architecture_context}. Check for memory leaks, resource contention, and bottlenecks under load."
- Expected output: Performance metrics, bottleneck analysis, optimization recommendations
- Context: Uses architecture insights to identify systemic performance issues

## Phase 3: Testing & Documentation Review
Use Task tool for test and documentation quality assessment:
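Test-pyramid adherence, one of the 3A checks, can be reduced to a simple shape test over suite counts. A minimal sketch follows; the 50% unit-test floor is an illustrative assumption, not a fixed standard.

```python
# Sketch: sanity-check the test pyramid (unit > integration > e2e).

def pyramid_violations(unit, integration, e2e):
    """Return a list of deviations from the expected pyramid shape."""
    issues = []
    if unit < integration:
        issues.append("fewer unit tests than integration tests")
    if integration < e2e:
        issues.append("fewer integration tests than end-to-end tests")
    total = unit + integration + e2e
    if total and unit / total < 0.5:  # illustrative 50% floor
        issues.append("unit tests are less than half of the suite")
    return issues

healthy = pyramid_violations(800, 150, 20)    # well-shaped suite
inverted = pyramid_violations(100, 200, 50)   # "ice-cream cone" suite
```

A healthy suite yields an empty list; an inverted suite yields concrete gap descriptions that feed directly into the testing gap analysis.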
### 3A. Test Coverage & Quality Analysis
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Evaluate testing strategy and implementation for: $ARGUMENTS. Analyze unit test coverage, integration test completeness, end-to-end test scenarios, test pyramid adherence, and test maintainability. Review test quality metrics including assertion density, test isolation, mock usage, and flakiness. Consider security and performance test requirements from Phase 2: {phase2_security_context}, {phase2_performance_context}. Verify TDD practices if the --tdd-review flag is set."
- Expected output: Coverage report, test quality metrics, testing gap analysis
- Context: Incorporates security and performance testing requirements from Phase 2

### 3B. Documentation & API Specification Review
- Use Task tool with subagent_type="code-documentation::docs-architect"
- Prompt: "Review documentation completeness and quality for: $ARGUMENTS. Assess inline code documentation, API documentation (OpenAPI/Swagger), architecture decision records (ADRs), README completeness, deployment guides, and runbooks. Verify documentation reflects actual implementation based on all previous phase findings: {phase1_context}, {phase2_context}. Check for outdated documentation, missing examples, and unclear explanations."
- Expected output: Documentation coverage report, inconsistency list, improvement recommendations
- Context: Cross-references all previous findings to ensure documentation accuracy

## Phase 4: Best Practices & Standards Compliance
Use Task tool to verify framework-specific and industry best practices:
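Dispatching on the `--framework` flag can be sketched as a lookup table mapping framework names to framework-specific checks. The checklist entries here are hypothetical examples, not an exhaustive or authoritative set.

```python
# Sketch: map the --framework flag to illustrative framework-specific checks.

FRAMEWORK_CHECKS = {
    "react":  ["Rules of Hooks compliance", "memoization of expensive renders"],
    "spring": ["constructor injection", "transactional boundaries"],
    "django": ["ORM query efficiency", "middleware ordering"],
}

def checks_for(framework):
    """Return framework-specific checks; unknown names get an empty list."""
    return FRAMEWORK_CHECKS.get(framework.lower(), [])

react_checks = checks_for("React")   # flag values are case-insensitive here
unknown = checks_for("rails")        # unsupported framework -> no extra checks
```

Falling back to an empty list (rather than raising) keeps the generic language checks running even when no framework-specific profile exists.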
### 4A. Framework & Language Best Practices
- Use Task tool with subagent_type="framework-migration::legacy-modernizer"
- Prompt: "Verify adherence to framework and language best practices for: $ARGUMENTS. Check modern JavaScript/TypeScript patterns, React hooks best practices, Python PEP compliance, Java enterprise patterns, Go idiomatic code, or framework-specific conventions (based on the --framework flag). Review package management, build configuration, environment handling, and deployment practices. Include all quality issues from previous phases: {all_previous_contexts}."
- Expected output: Best practices compliance report, modernization recommendations
- Context: Synthesizes all previous findings for framework-specific guidance

### 4B. CI/CD & DevOps Practices Review
- Use Task tool with subagent_type="cicd-automation::deployment-engineer"
- Prompt: "Review CI/CD pipeline and DevOps practices for: $ARGUMENTS. Evaluate build automation, test automation integration, deployment strategies (blue-green, canary), infrastructure as code, monitoring/observability setup, and incident response procedures. Assess pipeline security, artifact management, and rollback capabilities. Consider all issues identified in previous phases that impact deployment: {all_critical_issues}."
- Expected output: Pipeline assessment, DevOps maturity evaluation, automation recommendations
- Context: Focuses on operationalizing fixes for all identified issues

## Consolidated Report Generation
Compile all phase outputs into a comprehensive review report:
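The bucketing into the P0–P3 tiers defined in this report can be sketched as a small classifier. The finding field names (`cvss`, `category`) and category labels are assumptions chosen for illustration; the CVSS > 7.0 rule mirrors the P0 criteria.

```python
# Sketch: assign consolidated findings to P0-P3 priority tiers.

def priority(finding):
    """Map a single finding to a priority tier per the report's rules."""
    if finding.get("cvss", 0) > 7.0 or finding.get("category") in (
            "data-loss", "auth-bypass", "compliance"):
        return "P0"
    if finding.get("category") in ("performance", "missing-tests",
                                   "vulnerable-dependency"):
        return "P1"
    if finding.get("category") in ("documentation", "refactoring"):
        return "P2"
    return "P3"  # style issues, minor smells, cosmetics

def consolidate(findings):
    """Group all phase findings by tier, highest priority first."""
    report = {"P0": [], "P1": [], "P2": [], "P3": []}
    for f in findings:
        report[priority(f)].append(f)
    return report

report = consolidate([
    {"title": "SQL injection", "cvss": 9.8},
    {"title": "N+1 queries", "category": "performance"},
    {"title": "inconsistent naming", "category": "style"},
])
```

With `--strict-mode` set, the workflow would fail whenever `report["P0"]` is non-empty.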
### Critical Issues (P0 - Must Fix Immediately)
- Security vulnerabilities with CVSS > 7.0
- Data loss or corruption risks
- Authentication/authorization bypasses
- Production stability threats
- Compliance violations (GDPR, PCI DSS, SOC 2)

### High Priority (P1 - Fix Before Next Release)
- Performance bottlenecks impacting user experience
- Missing critical test coverage
- Architectural anti-patterns causing technical debt
- Outdated dependencies with known vulnerabilities
- Code quality issues affecting maintainability

### Medium Priority (P2 - Plan for Next Sprint)
- Non-critical performance optimizations
- Documentation gaps and inconsistencies
- Code refactoring opportunities
- Test quality improvements
- DevOps automation enhancements

### Low Priority (P3 - Track in Backlog)
- Style guide violations
- Minor code smell issues
- Nice-to-have documentation updates
- Cosmetic improvements

## Success Criteria
Review is considered successful when:
- All critical security vulnerabilities are identified and documented
- Performance bottlenecks are profiled with remediation paths
- Test coverage gaps are mapped with priority recommendations
- Architecture risks are assessed with mitigation strategies
- Documentation reflects actual implementation state
- Framework best practices compliance is verified
- CI/CD pipeline supports safe deployment of reviewed code
- Clear, actionable feedback is provided for all findings
- Metrics dashboard shows improvement trends
- Team has a clear, prioritized action plan for remediation

Target: $ARGUMENTS