📘 How to Use the Sprint Playbook Template (Improved)
(Enhanced Guide for AI Agent)
🎯 Purpose
This enhanced guide explains how to generate a Sprint Playbook from the Improved Sprint Playbook Template. The Playbook serves as the single source of truth for sprint planning, execution, and tracking.
AI Agent Role:
- Analyze user requirements with enhanced clarity checking
- Perform comprehensive project state assessment
- Create granular, testable user stories with dependencies
- Populate playbook with actionable technical guidance
- Establish quality gates and risk mitigation strategies
- Enable iterative refinement based on user feedback
🛠 Enhanced Step-by-Step Instructions
1. Analyze User Input (Enhanced)
MANDATORY ANALYSIS PROCESS:
- Extract Primary Goal: What is the main business/technical objective?
- Identify Scope Boundaries: What is explicitly included/excluded?
- Parse Technical Constraints: Frameworks, patterns, performance requirements
- Assess Complexity Level: Simple (1-2 days), Medium (3-5 days), Complex (1+ weeks)
- Identify Integration Points: Other systems, APIs, data sources
- Check for Non-Functional Requirements: Security, performance, accessibility
Enhanced Clarity Checking: If ANY of these conditions are met, STOP and ask for clarification:
- Vague Success Criteria: "improve", "enhance", "make better" without metrics
- Missing Technical Details: Which specific framework version, API endpoints, data formats
- Unclear User Experience: No mention of who uses the feature or how
- Ambiguous Scope: Could be interpreted as either small change or major feature
- Conflicting Requirements: Performance vs. feature richness, security vs. usability
- Missing Dependencies: References to undefined components, services, or data
Structured Clarification Process:
- Categorize Unknowns: Technical, Business, User Experience, Integration
- Ask Specific Questions: Provide 2-3 concrete options when possible
- Request Examples: "Can you show me a similar feature you like?"
- Validate Understanding: Repeat back interpretation for confirmation
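To make the clarification step concrete, here is a minimal TypeScript sketch of one way a clarification request could be structured before it is sent to the user. The type and field names are illustrative assumptions, not part of the template:

```ts
// Hypothetical shape for a clarification request; names are illustrative.
type UnknownCategory = "technical" | "business" | "user-experience" | "integration";

interface ClarificationRequest {
  category: UnknownCategory;
  question: string;        // one specific, answerable question
  options?: string[];      // 2-3 concrete options when possible
  interpretation: string;  // restated understanding for the user to confirm
}

// Example: an ambiguous "add login" requirement turned into a concrete question.
const request: ClarificationRequest = {
  category: "technical",
  question: "Which authentication flow should the login use?",
  options: ["JWT with refresh tokens", "Server-side sessions", "OAuth only"],
  interpretation: "Users sign in with Google and stay signed in across visits.",
};
```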
2. Assess Current State (Comprehensive)
MANDATORY DISCOVERY STEPS:
- Read Core Application Files:
  - Entry points: `main.py`, `index.js`, `app.py`, `src/main.ts`
  - Configuration: `.env`, `config.*`, `settings.py`, `next.config.js`
  - Package management: `package.json`, `requirements.txt`, `Cargo.toml`, `pom.xml`
- Analyze Architecture:
  - Directory structure and organization patterns
  - Module boundaries and dependency flows
  - Data models and database schemas
  - API endpoints and routing patterns
- Review Testing Infrastructure:
  - Test frameworks and utilities available
  - Current test coverage and patterns
  - CI/CD pipeline configuration
  - Quality gates and automation
- Assess Documentation:
  - README completeness and accuracy
  - API documentation status
  - Code comments and inline docs
  - User guides or tutorials
- Identify Technical Debt:
  - Known issues or TODOs
  - Deprecated dependencies
  - Performance bottlenecks
  - Security vulnerabilities
Output Requirements:
- Factual Assessment: No speculation, only observable facts
- Relevance Filtering: Focus on sprint-related components only
- Version Specificity: Include exact version numbers for all dependencies
- Gap Identification: Clearly note missing pieces needed for sprint success
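As an illustration of the discovery step, the following TypeScript sketch checks a repository root for the entry-point, configuration, and package-management files listed above, using only Node built-ins. The grouping and file names come from this guide; glob patterns such as `config.*` are omitted for brevity:

```ts
// Minimal discovery sketch: report which of the files named above exist.
import { existsSync } from "node:fs";
import { join } from "node:path";

const CANDIDATES: Record<string, string[]> = {
  entryPoints: ["main.py", "index.js", "app.py", "src/main.ts"],
  configuration: [".env", "settings.py", "next.config.js"],
  packageManagement: ["package.json", "requirements.txt", "Cargo.toml", "pom.xml"],
};

function discover(root: string): Record<string, string[]> {
  const found: Record<string, string[]> = {};
  for (const [group, files] of Object.entries(CANDIDATES)) {
    found[group] = files.filter((file) => existsSync(join(root, file)));
  }
  return found;
}

console.log(discover(process.cwd()));
```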
3. Sprint ID Selection (Enhanced)
EXACT ALGORITHM:
- Check Directory Structure:
  - If `docs/sprints/` exists: scan for files matching `sprint-\d\d-*.md`
  - Else: create the directory and use "01"
- Extract and Validate IDs:
  - Use regex: `sprint-(\d\d)-.*\.md`
  - Validate two-digit format with zero padding
  - Sort numerically (not lexicographically)
- Calculate Next ID:
  - Find maximum existing ID
  - Increment by 1
  - Ensure zero-padding (sprintf "%02d" format)
- Conflict Resolution:
  - If gaps exist (e.g., 01, 03, 05), continue from the maximum (next ID is 06); never fill gaps
  - If no pattern found, default to "01"
  - Never reuse IDs, even if a sprint was cancelled
Validation Checkpoint: Double-check calculated ID against filesystem before proceeding.
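The algorithm above translates directly into code. Below is a minimal TypeScript (Node) sketch, assuming sprint files live under `docs/sprints/`; it is one possible implementation, not a required one:

```ts
// Sketch of the ID-selection algorithm above.
import { existsSync, mkdirSync, readdirSync } from "node:fs";

function nextSprintId(dir = "docs/sprints"): string {
  if (!existsSync(dir)) {
    mkdirSync(dir, { recursive: true });
    return "01"; // no directory yet: first sprint
  }
  const ids = readdirSync(dir)
    .map((name) => /^sprint-(\d\d)-.*\.md$/.exec(name))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => parseInt(m[1], 10)); // compare numerically, not lexicographically
  if (ids.length === 0) return "01"; // no matching pattern found
  const next = Math.max(...ids) + 1; // max + 1: gaps are never filled or reused
  return String(next).padStart(2, "0"); // zero-padded, i.e. "%02d"
}
```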
4. Define Desired State (Enhanced)
MANDATORY SECTIONS WITH EXAMPLES:
- New Features (Specific):
  - ❌ Bad: "User authentication system"
  - ✅ Good: "JWT-based login with Google OAuth integration, session management, and role-based access control"
- Modified Features (Before/After):
  - ❌ Bad: "Improve dashboard performance"
  - ✅ Good: "Dashboard load time: 3.2s → <1.5s by implementing virtual scrolling and data pagination"
- Expected Behavior Changes (User-Visible):
  - ❌ Bad: "Better error handling"
  - ✅ Good: "Invalid form inputs show inline validation messages instead of generic alerts"
- External Dependencies (Version-Specific):
  - ❌ Bad: "Add React components"
  - ✅ Good: "Add @headlessui/react ^1.7.0 for accessible dropdown components"
Quality Checklist:
- Each item is independently verifiable
- Success criteria are measurable
- Implementation approach is technically feasible
- Dependencies are explicitly listed with versions
5. Break Down Into User Stories (Enhanced)
STORY GRANULARITY RULES:
- Maximum Implementation Time: 4-6 hours of focused work
- Independent Testability: Each story can be tested in isolation
- Single Responsibility: One clear functional outcome per story
- Atomic Commits: Typically 1-2 commits per story maximum
ENHANCED STORY COMPONENTS:
- Story ID: Sequential numbering with dependency tracking
- Title: Action-oriented, 2-4 words (e.g., "Add Login Button", "Implement JWT Validation")
- Description: Includes context, acceptance criteria preview, technical approach
- Acceptance Criteria: 3-5 specific, testable conditions using Given/When/Then format
- Definition of Done: Includes "user testing completed" for all user-facing changes
- Dependencies: References to other stories that must complete first
- Estimated Time: Realistic hour estimate for planning
- Risk Level: Low/Medium/High based on technical complexity
ACCEPTANCE CRITERIA FORMAT:
Given [initial state]
When [user action or system trigger]
Then [expected outcome]
And [additional verifications]
Example User Story:

| ID | Title | Description | Acceptance Criteria | Definition of Done | Owner | Status | Estimated Time | Dependencies |
|---|---|---|---|---|---|---|---|---|
| US-3 | Implement JWT Validation | Create middleware to validate JWT tokens on protected routes | Given a request to a protected endpoint, When the JWT is missing/invalid/expired, Then return 401 with a specific error message, And log a security event | Middleware implemented, unit tests added, error handling tested, security logging verified, **user testing completed** | AI-Agent | 🔲 todo | 3 hours | US-1, US-2 |
DEPENDENCY MANAGEMENT:
- Use topological sorting for story order
- Identify critical path and potential parallelization
- Plan for dependency failures or delays
- Consider story splitting to reduce dependencies
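For the dependency ordering mentioned above, a standard topological sort (Kahn's algorithm) works. The sketch below is illustrative; the story IDs and the dependency map are assumptions:

```ts
// Kahn's algorithm: order stories so every prerequisite comes first.
function topoSort(deps: Record<string, string[]>): string[] {
  const inDegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [story, prereqs] of Object.entries(deps)) {
    inDegree.set(story, prereqs.length);
    for (const dep of prereqs) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), story]);
    }
  }
  // Start with stories that have no prerequisites; these can run in parallel.
  const queue = [...inDegree].filter(([, d]) => d === 0).map(([s]) => s);
  const order: string[] = [];
  while (queue.length > 0) {
    const story = queue.shift()!;
    order.push(story);
    for (const next of dependents.get(story) ?? []) {
      const d = inDegree.get(next)! - 1;
      inDegree.set(next, d);
      if (d === 0) queue.push(next);
    }
  }
  if (order.length !== Object.keys(deps).length) {
    throw new Error("Circular dependency between stories");
  }
  return order;
}

topoSort({ "US-1": [], "US-2": ["US-1"], "US-3": ["US-1", "US-2"] });
// -> ["US-1", "US-2", "US-3"]
```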
6. Add Technical Instructions (Comprehensive)
REQUIRED SECTIONS WITH DEPTH:
- Code Snippets/Patterns (Executable Examples):

  ```ts
  // Actual working example following project conventions
  export const authenticatedRoute = withAuth(async (req: AuthRequest, res: Response) => {
    const { userId } = req.user; // from JWT middleware
    const data = await fetchUserData(userId);
    return res.json({ success: true, data });
  });
  ```

- Architecture Guidelines (Specific Patterns):
  - Layer Separation: "Controllers in `/api/`, business logic in `/services/`, data access in `/repositories/`"
  - Error Handling: "Use Result<T, E> pattern, never throw in async functions"
  - State Management: "Use Zustand for client state, React Query for server state"
- Coding Style Conventions (Tool-Specific):
  - Naming: "Components: PascalCase, functions: camelCase, constants: SCREAMING_SNAKE_CASE"
  - Files: "Components end with `.tsx`, utilities with `.ts`, tests with `.test.ts`"
  - Imports: "Absolute imports with `@/` alias, group by type (libraries, internal, relative)"
- Testing Strategy (Framework-Specific):
  - Unit Tests: "Jest + Testing Library, minimum 80% coverage for business logic"
  - Integration: "Supertest for API endpoints, test database with cleanup"
  - Manual Testing: "User validates UI/UX after each user-facing story completion"
- Quality Gates (Automated Checks):
  - Build: "`npm run build` must pass without errors"
  - Lint: "`npm run lint:check` must pass with zero warnings"
  - Format: "`npm run prettier:check` must pass"
  - Tests: "`npm test` must pass with coverage thresholds met"
7. Capture Risks and Dependencies (Proactive)
ENHANCED RISK ASSESSMENT:
- Technical Risks (with Mitigation):
  - Risk: "JWT library might not support RS256 algorithm"
  - Impact: High (blocks authentication)
  - Probability: Low
  - Mitigation: "Research jsonwebtoken vs. jose libraries in discovery phase"
- Integration Risks (with Fallbacks):
  - Risk: "Google OAuth API rate limits during development"
  - Impact: Medium (slows testing)
  - Probability: Medium
  - Fallback: "Mock OAuth responses for local development"
- Timeline Risks (with Buffers):
  - Risk: "Database migration complexity unknown"
  - Impact: High (affects multiple stories)
  - Buffer: "Add 20% time buffer, prepare manual migration scripts"
DEPENDENCY TRACKING:
- Internal Dependencies: Other modules, shared utilities, database changes
- External Dependencies: Third-party APIs, service availability, credentials
- Human Dependencies: Design assets, content, domain expertise
- Infrastructure Dependencies: Deployment access, environment variables, certificates
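One possible shape for keeping these risks in a machine-readable register is sketched below; all type and field names are illustrative assumptions rather than a prescribed schema:

```ts
// Hypothetical risk-register entry; names are assumptions, not a schema.
type Severity = "low" | "medium" | "high";

interface RiskEntry {
  risk: string;
  impact: Severity;
  probability?: Severity;
  impactNote?: string;  // what the risk blocks or slows
  mitigation?: string;  // action taken up front
  fallback?: string;    // plan if the risk materializes
  buffer?: string;      // e.g. "add 20% time buffer"
}

// First example from the risk assessment above, expressed as data:
const jwtRisk: RiskEntry = {
  risk: "JWT library might not support RS256 algorithm",
  impact: "high",
  probability: "low",
  impactNote: "blocks authentication",
  mitigation: "Research jsonwebtoken vs. jose libraries in discovery phase",
};
```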
8. Enhanced Quality Gates
CODE QUALITY GATES:
- Static Analysis: ESLint, SonarQube, or equivalent passes
- Security Scan: No high/critical vulnerabilities in dependencies
- Performance: No significant regression in key metrics
- Accessibility: WCAG compliance for user-facing changes
TESTING QUALITY GATES:
- Coverage Threshold: Minimum X% line coverage maintained
- Test Quality: No skipped tests, meaningful assertions
- Edge Cases: Error conditions and boundary values tested
- User Acceptance: Manual testing completed and approved
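The coverage-threshold gate can be enforced mechanically. Below is a minimal `jest.config.ts` sketch; the 80% figure echoes the unit-test target in step 6 and should be tuned per project:

```ts
// jest.config.ts: fail `npm test` when coverage drops below the threshold.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, branches: 80, functions: 80, statements: 80 },
  },
};

export default config;
```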
DOCUMENTATION QUALITY GATES:
- API Changes: OpenAPI spec updated for backend changes
- User Impact: User-facing changes documented in appropriate format
- Technical Decisions: ADRs (Architecture Decision Records) updated for significant changes
- Runbook Updates: Deployment or operational procedures updated
9. Advanced Definition of Done (DoD)
AI-RESPONSIBLE ITEMS (Comprehensive):
- All user stories meet individual DoD criteria
- All quality gates passed (code, testing, documentation)
- Code compiles and passes all automated tests
- Security review completed (dependency scan, code analysis)
- Performance benchmarks meet requirements
- Accessibility standards met (if applicable)
- Code is committed with conventional commit messages
- Branch pushed with all commits and proper history
- Documentation updated and reviewed
- Sprint status updated to ✅ completed
- Technical debt documented for future sprints
- Lessons learned captured in playbook
USER-ONLY ITEMS (Clear Boundaries):
- User acceptance testing completed successfully
- Cross-browser/device compatibility verified
- Production deployment completed and verified
- External system integrations tested end-to-end
- Performance validated in production environment
- Stakeholder sign-off received
- User training/documentation reviewed (if applicable)
- Monitoring and alerting configured (if applicable)
⚠️ Enhanced Guardrails
SCOPE CREEP PREVENTION:
- Document any "nice-to-have" features discovered during implementation
- Create follow-up sprint candidates for scope additions
- Maintain strict focus on original sprint goal
- Get explicit approval for any requirement changes
TECHNICAL DEBT MANAGEMENT:
- Document shortcuts taken with rationale
- Estimate effort to address debt properly
- Prioritize debt that affects sprint success
- Plan debt paydown in future sprints
COMMUNICATION PROTOCOLS:
- Daily Updates: Progress, blockers, timeline risks
- Weekly Reviews: Demo working features, gather feedback
- Blocker Escalation: Immediate notification with context and options
- Scope Questions: Present options with pros/cons, request decision
✅ Enhanced Output Requirements
MANDATORY CHECKLIST - Sprint Playbook must have:
- ✅ Validated Sprint ID - Following algorithmic ID selection
- ✅ Complete metadata - All fields filled with realistic estimates
- ✅ Comprehensive state analysis - Based on thorough code examination
- ✅ Measurable desired state - Specific success criteria, not vague goals
- ✅ Granular user stories - 4-6 hour implementation chunks with dependencies
- ✅ Executable acceptance criteria - Given/When/Then format where appropriate
- ✅ Comprehensive DoD items - Including user testing checkpoints
- ✅ Detailed technical guidance - Working code examples and specific tools
- ✅ Proactive risk management - Risks with mitigation strategies
- ✅ Quality gates defined - Automated and manual verification steps
- ✅ Clear responsibility boundaries - AI vs User tasks explicitly separated
- ✅ Communication protocols - How and when to interact during sprint
FINAL VALIDATION: The completed playbook should enable a different AI agent to execute the sprint successfully with minimal additional clarification.
ITERATIVE IMPROVEMENT: After each sprint, capture lessons learned and refine the process for better efficiency and quality in future sprints.