From 397b4845d67bf776503f642a9df1deb123576754 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nikola=20Dere=C5=BEi=C4=8D?= Date: Sun, 14 Sep 2025 22:03:23 +0200 Subject: [PATCH] docs: add improved sprint framework based on Sprint 01 lessons learned MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Create enhanced versions of sprint framework files incorporating key improvements: **Sprint Playbook Template (Improved):** - Add comprehensive status tracking with current focus and complexity estimation - Enhance quality gates for code, testing, and documentation - Include proactive risk mitigation strategies with fallback approaches - Add lessons learned and retrospective sections for continuous improvement - Define clear communication protocols and success metrics **How-to-Use Guide (Improved):** - Implement advanced clarity checking to identify ambiguities before starting - Add comprehensive project analysis including testing infrastructure assessment - Enhance story breakdown with Given/When/Then format and dependency tracking - Include proactive risk management with mitigation strategies - Define quality gates for automated and manual verification - Add iterative improvement process for framework refinement **Implementation Guidelines (Improved):** - Add structured testing checkpoint protocol with user feedback formats - Implement iterative refinement process for handling user feedback - Enhance communication with proactive updates and blocker notifications - Add advanced error handling with classification and recovery protocols - Include knowledge transfer and technical decision documentation - Add continuous quality monitoring with automated checks These improvements generalize lessons from Sprint 01 successful execution: - Better user collaboration through structured testing checkpoints - Enhanced risk management with proactive identification and mitigation - Comprehensive quality assurance across multiple levels - Systematic knowledge capture and process optimization - Clear scope management and change control procedures ๐Ÿค– Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude --- ...o-use-sprint-playbook-template-improved.md | 363 ++++++++++++ ...rint-implementation-guidelines-improved.md | 524 ++++++++++++++++++ .../sprint-playbook-template-improved.md | 232 ++++++++ 3 files changed, 1119 insertions(+) create mode 100644 docs/framework/how-to-use-sprint-playbook-template-improved.md create mode 100644 docs/framework/sprint-implementation-guidelines-improved.md create mode 100644 docs/framework/sprint-playbook-template-improved.md diff --git a/docs/framework/how-to-use-sprint-playbook-template-improved.md b/docs/framework/how-to-use-sprint-playbook-template-improved.md new file mode 100644 index 0000000..076d669 --- /dev/null +++ b/docs/framework/how-to-use-sprint-playbook-template-improved.md @@ -0,0 +1,363 @@ +# ๐Ÿ“˜ How to Use the Sprint Playbook Template (Improved) + +*(Enhanced Guide for AI Agent)* + +--- + +## ๐ŸŽฏ Purpose + +This enhanced guide explains how to generate a **Sprint Playbook** from the **Improved Sprint Playbook Template**. +The Playbook serves as the **single source of truth** for sprint planning, execution, and tracking. + +**AI Agent Role:** +1. Analyze user requirements with enhanced clarity checking +2. Perform comprehensive project state assessment +3. Create granular, testable user stories with dependencies +4. Populate playbook with actionable technical guidance +5. 
Establish quality gates and risk mitigation strategies +6. Enable iterative refinement based on user feedback + +--- + +## ๐Ÿ›  Enhanced Step-by-Step Instructions + +### 1. Analyze User Input (Enhanced) + +**MANDATORY ANALYSIS PROCESS:** +1. **Extract Primary Goal**: What is the main business/technical objective? +2. **Identify Scope Boundaries**: What is explicitly included/excluded? +3. **Parse Technical Constraints**: Frameworks, patterns, performance requirements +4. **Assess Complexity Level**: Simple (1-2 days), Medium (3-5 days), Complex (1+ weeks) +5. **Identify Integration Points**: Other systems, APIs, data sources +6. **Check for Non-Functional Requirements**: Security, performance, accessibility + +**Enhanced Clarity Checking:** +If ANY of these conditions are met, STOP and ask for clarification: + +- **Vague Success Criteria**: "improve", "enhance", "make better" without metrics +- **Missing Technical Details**: Which specific framework version, API endpoints, data formats +- **Unclear User Experience**: No mention of who uses the feature or how +- **Ambiguous Scope**: Could be interpreted as either small change or major feature +- **Conflicting Requirements**: Performance vs. feature richness, security vs. usability +- **Missing Dependencies**: References to undefined components, services, or data + +**Structured Clarification Process:** +1. **Categorize Unknowns**: Technical, Business, User Experience, Integration +2. **Ask Specific Questions**: Provide 2-3 concrete options when possible +3. **Request Examples**: "Can you show me a similar feature you like?" +4. **Validate Understanding**: Repeat back interpretation for confirmation + +--- + +### 2. Assess Current State (Comprehensive) + +**MANDATORY DISCOVERY STEPS:** +1. **Read Core Application Files**: + - Entry points: `main.py`, `index.js`, `app.py`, `src/main.ts` + - Configuration: `.env`, `config.*`, `settings.py`, `next.config.js` + - Package management: `package.json`, `requirements.txt`, `Cargo.toml`, `pom.xml` + +2. **Analyze Architecture**: + - Directory structure and organization patterns + - Module boundaries and dependency flows + - Data models and database schemas + - API endpoints and routing patterns + +3. **Review Testing Infrastructure**: + - Test frameworks and utilities available + - Current test coverage and patterns + - CI/CD pipeline configuration + - Quality gates and automation + +4. **Assess Documentation**: + - README completeness and accuracy + - API documentation status + - Code comments and inline docs + - User guides or tutorials + +5. **Identify Technical Debt**: + - Known issues or TODOs + - Deprecated dependencies + - Performance bottlenecks + - Security vulnerabilities + +**Output Requirements:** +- **Factual Assessment**: No speculation, only observable facts +- **Relevance Filtering**: Focus on sprint-related components only +- **Version Specificity**: Include exact version numbers for all dependencies +- **Gap Identification**: Clearly note missing pieces needed for sprint success + +--- + +### 3. Sprint ID Selection (Enhanced) + +**EXACT ALGORITHM:** +1. **Check Directory Structure**: + ```bash + if docs/sprints/ exists: + scan for files matching "sprint-\d\d-*.md" + else: + create directory and use "01" + ``` + +2. **Extract and Validate IDs**: + - Use regex: `sprint-(\d\d)-.*\.md` + - Validate two-digit format with zero padding + - Sort numerically (not lexicographically) + +3. 
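As a concrete sketch of the ID-selection procedure in this step (including the next-ID calculation and conflict-resolution rules described just below), the following TypeScript fragment scans `docs/sprints/`, extracts the two-digit IDs, and returns the next zero-padded value. The function name `nextSprintId` and the use of Node's `fs` module are assumptions made for illustration; the gap-handling variant from the conflict-resolution step is intentionally not shown.

```typescript
import { existsSync, mkdirSync, readdirSync } from "fs";

// Scan docs/sprints/, extract existing sprint IDs, and compute the next one,
// following the scan → extract/validate → increment → zero-pad steps above.
export function nextSprintId(dir = "docs/sprints"): string {
  // If the directory does not exist yet, create it and start at "01".
  if (!existsSync(dir)) {
    mkdirSync(dir, { recursive: true });
    return "01";
  }

  // Extract two-digit IDs from files matching "sprint-NN-*.md".
  const ids = readdirSync(dir)
    .map((file) => /^sprint-(\d{2})-.*\.md$/.exec(file))
    .filter((match): match is RegExpExecArray => match !== null)
    .map((match) => Number.parseInt(match[1], 10));

  if (ids.length === 0) return "01";

  // Compare numerically (not lexicographically), take the maximum,
  // increment, and zero-pad.
  const next = Math.max(...ids) + 1;
  return String(next).padStart(2, "0");
}
```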
**Calculate Next ID**: + - Find maximum existing ID + - Increment by 1 + - Ensure zero-padding (sprintf "%02d" format) + +4. **Conflict Resolution**: + - If gaps exist (e.g., 01, 03, 05), use next in sequence (04) + - If no pattern found, default to "01" + - Never reuse IDs even if sprint was cancelled + +**Validation Checkpoint**: Double-check calculated ID against filesystem before proceeding. + +--- + +### 4. Define Desired State (Enhanced) + +**MANDATORY SECTIONS WITH EXAMPLES:** + +1. **New Features** (Specific): + ``` + โŒ Bad: "User authentication system" + โœ… Good: "JWT-based login with Google OAuth integration, session management, and role-based access control" + ``` + +2. **Modified Features** (Before/After): + ``` + โŒ Bad: "Improve dashboard performance" + โœ… Good: "Dashboard load time: 3.2s โ†’ <1.5s by implementing virtual scrolling and data pagination" + ``` + +3. **Expected Behavior Changes** (User-Visible): + ``` + โŒ Bad: "Better error handling" + โœ… Good: "Invalid form inputs show inline validation messages instead of generic alerts" + ``` + +4. **External Dependencies** (Version-Specific): + ``` + โŒ Bad: "Add React components" + โœ… Good: "Add @headlessui/react ^1.7.0 for accessible dropdown components" + ``` + +**Quality Checklist**: +- Each item is independently verifiable +- Success criteria are measurable +- Implementation approach is technically feasible +- Dependencies are explicitly listed with versions + +--- + +### 5. Break Down Into User Stories (Enhanced) + +**STORY GRANULARITY RULES:** +- **Maximum Implementation Time**: 4-6 hours of focused work +- **Independent Testability**: Each story can be tested in isolation +- **Single Responsibility**: One clear functional outcome per story +- **Atomic Commits**: Typically 1-2 commits per story maximum + +**ENHANCED STORY COMPONENTS:** + +1. **Story ID**: Sequential numbering with dependency tracking +2. **Title**: Action-oriented, 2-4 words (e.g., "Add Login Button", "Implement JWT Validation") +3. **Description**: Includes context, acceptance criteria preview, technical approach +4. **Acceptance Criteria**: 3-5 specific, testable conditions using Given/When/Then format +5. **Definition of Done**: Includes "user testing completed" for all user-facing changes +6. **Dependencies**: References to other stories that must complete first +7. **Estimated Time**: Realistic hour estimate for planning +8. **Risk Level**: Low/Medium/High based on technical complexity + +**ACCEPTANCE CRITERIA FORMAT:** +``` +Given [initial state] +When [user action or system trigger] +Then [expected outcome] +And [additional verifications] +``` + +**Example User Story:** +``` +| US-3 | Implement JWT Validation | Create middleware to validate JWT tokens on protected routes | Given a request to protected endpoint, When JWT is missing/invalid/expired, Then return 401 with specific error message, And log security event | Middleware implemented, unit tests added, error handling tested, security logging verified, **user testing completed** | AI-Agent | ๐Ÿ”ฒ todo | 3 hours | US-1, US-2 | +``` + +**DEPENDENCY MANAGEMENT:** +- Use topological sorting for story order +- Identify critical path and potential parallelization +- Plan for dependency failures or delays +- Consider story splitting to reduce dependencies + +--- + +### 6. Add Technical Instructions (Comprehensive) + +**REQUIRED SECTIONS WITH DEPTH:** + +1. 
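As a follow-up to the dependency-management guidance in step 5 above ("use topological sorting for story order"), here is a minimal TypeScript sketch of that ordering. The `Story` shape and the function name are assumptions made for this example only; they are not part of the template.

```typescript
export interface Story {
  id: string;          // e.g. "US-3"
  title: string;
  dependsOn: string[]; // IDs of stories that must be completed first
}

// Returns story IDs in an order that respects dependencies (Kahn-style),
// or throws if the stories form a circular dependency.
export function implementationOrder(stories: Story[]): string[] {
  const knownIds = new Set(stories.map((s) => s.id));
  const remaining = new Map<string, Set<string>>();
  for (const story of stories) {
    // Ignore references to stories outside this sprint.
    remaining.set(
      story.id,
      new Set(story.dependsOn.filter((dep) => knownIds.has(dep)))
    );
  }

  const order: string[] = [];
  while (remaining.size > 0) {
    // Every story whose dependencies are all satisfied can be scheduled next.
    const ready = [...remaining.entries()]
      .filter(([, deps]) => deps.size === 0)
      .map(([id]) => id);

    if (ready.length === 0) {
      throw new Error("Circular dependency between user stories");
    }

    for (const id of ready) {
      order.push(id);
      remaining.delete(id);
      for (const deps of remaining.values()) deps.delete(id);
    }
  }
  return order;
}
```

For the example story above (US-3 depending on US-1 and US-2), this ordering schedules US-1 and US-2 before US-3.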
**Code Snippets/Patterns** (Executable Examples): + ```typescript + // Actual working example following project conventions + export const authenticatedRoute = withAuth(async (req: AuthRequest, res: Response) => { + const { userId } = req.user; // from JWT middleware + const data = await fetchUserData(userId); + return res.json({ success: true, data }); + }); + ``` + +2. **Architecture Guidelines** (Specific Patterns): + - **Layer Separation**: "Controllers in `/api/`, business logic in `/services/`, data access in `/repositories/`" + - **Error Handling**: "Use Result pattern, never throw in async functions" + - **State Management**: "Use Zustand for client state, React Query for server state" + +3. **Coding Style Conventions** (Tool-Specific): + - **Naming**: "Components: PascalCase, functions: camelCase, constants: SCREAMING_SNAKE_CASE" + - **Files**: "Components end with `.tsx`, utilities with `.ts`, tests with `.test.ts`" + - **Imports**: "Absolute imports with `@/` alias, group by type (libraries, internal, relative)" + +4. **Testing Strategy** (Framework-Specific): + - **Unit Tests**: "Jest + Testing Library, minimum 80% coverage for business logic" + - **Integration**: "Supertest for API endpoints, test database with cleanup" + - **Manual Testing**: "User validates UI/UX after each user-facing story completion" + +5. **Quality Gates** (Automated Checks): + - **Build**: "`npm run build` must pass without errors" + - **Lint**: "`npm run lint:check` must pass with zero warnings" + - **Format**: "`npm run prettier:check` must pass" + - **Tests**: "`npm test` must pass with coverage thresholds met" + +--- + +### 7. Capture Risks and Dependencies (Proactive) + +**ENHANCED RISK ASSESSMENT:** + +1. **Technical Risks** (with Mitigation): + ``` + Risk: "JWT library might not support RS256 algorithm" + Impact: High (blocks authentication) + Probability: Low + Mitigation: "Research jsonwebtoken vs. jose libraries in discovery phase" + ``` + +2. **Integration Risks** (with Fallbacks): + ``` + Risk: "Google OAuth API rate limits during development" + Impact: Medium (slows testing) + Probability: Medium + Fallback: "Mock OAuth responses for local development" + ``` + +3. **Timeline Risks** (with Buffers): + ``` + Risk: "Database migration complexity unknown" + Impact: High (affects multiple stories) + Buffer: "Add 20% time buffer, prepare manual migration scripts" + ``` + +**DEPENDENCY TRACKING:** +- **Internal Dependencies**: Other modules, shared utilities, database changes +- **External Dependencies**: Third-party APIs, service availability, credentials +- **Human Dependencies**: Design assets, content, domain expertise +- **Infrastructure Dependencies**: Deployment access, environment variables, certificates + +--- + +### 8. 
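The architecture guidelines above mention a "Result pattern, never throw in async functions" without showing it; the sketch below is one minimal way such a type can look. The names `Result`, `ok`, `err`, and `loadUserProfile`, and the `/api/users/:id` endpoint, are illustrative assumptions — a real project might instead use a library such as neverthrow or its own established variant.

```typescript
// A minimal Result type: async functions return either a success value or a
// typed error instead of throwing.
export type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

export const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
export const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Hypothetical usage: wrap a fetch call so callers must handle the failure branch.
export async function loadUserProfile(
  userId: string
): Promise<Result<unknown, string>> {
  try {
    const res = await fetch(`/api/users/${userId}`);
    if (!res.ok) return err(`Unexpected status ${res.status}`);
    return ok(await res.json());
  } catch (e) {
    return err(e instanceof Error ? e.message : "Network request failed");
  }
}
```

Callers then branch on `result.ok` instead of wrapping every call in try/catch, which keeps the error path visible in the type signature.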
Enhanced Quality Gates + +**CODE QUALITY GATES:** +- [ ] **Static Analysis**: ESLint, SonarQube, or equivalent passes +- [ ] **Security Scan**: No high/critical vulnerabilities in dependencies +- [ ] **Performance**: No significant regression in key metrics +- [ ] **Accessibility**: WCAG compliance for user-facing changes + +**TESTING QUALITY GATES:** +- [ ] **Coverage Threshold**: Minimum X% line coverage maintained +- [ ] **Test Quality**: No skipped tests, meaningful assertions +- [ ] **Edge Cases**: Error conditions and boundary values tested +- [ ] **User Acceptance**: Manual testing completed and approved + +**DOCUMENTATION QUALITY GATES:** +- [ ] **API Changes**: OpenAPI spec updated for backend changes +- [ ] **User Impact**: User-facing changes documented in appropriate format +- [ ] **Technical Decisions**: ADRs (Architecture Decision Records) updated for significant changes +- [ ] **Runbook Updates**: Deployment or operational procedures updated + +--- + +### 9. Advanced Definition of Done (DoD) + +**AI-RESPONSIBLE ITEMS** (Comprehensive): +* [ ] All user stories meet individual DoD criteria +* [ ] All quality gates passed (code, testing, documentation) +* [ ] Code compiles and passes all automated tests +* [ ] Security review completed (dependency scan, code analysis) +* [ ] Performance benchmarks meet requirements +* [ ] Accessibility standards met (if applicable) +* [ ] Code is committed with conventional commit messages +* [ ] Branch pushed with all commits and proper history +* [ ] Documentation updated and reviewed +* [ ] Sprint status updated to `โœ… completed` +* [ ] Technical debt documented for future sprints +* [ ] Lessons learned captured in playbook + +**USER-ONLY ITEMS** (Clear Boundaries): +* [ ] User acceptance testing completed successfully +* [ ] Cross-browser/device compatibility verified +* [ ] Production deployment completed and verified +* [ ] External system integrations tested end-to-end +* [ ] Performance validated in production environment +* [ ] Stakeholder sign-off received +* [ ] User training/documentation reviewed (if applicable) +* [ ] Monitoring and alerting configured (if applicable) + +--- + +## โš ๏ธ Enhanced Guardrails + +**SCOPE CREEP PREVENTION:** +- Document any "nice-to-have" features discovered during implementation +- Create follow-up sprint candidates for scope additions +- Maintain strict focus on original sprint goal +- Get explicit approval for any requirement changes + +**TECHNICAL DEBT MANAGEMENT:** +- Document shortcuts taken with rationale +- Estimate effort to address debt properly +- Prioritize debt that affects sprint success +- Plan debt paydown in future sprints + +**COMMUNICATION PROTOCOLS:** +- **Daily Updates**: Progress, blockers, timeline risks +- **Weekly Reviews**: Demo working features, gather feedback +- **Blocker Escalation**: Immediate notification with context and options +- **Scope Questions**: Present options with pros/cons, request decision + +--- + +## โœ… Enhanced Output Requirements + +**MANDATORY CHECKLIST** - Sprint Playbook must have: + +1. โœ… **Validated Sprint ID** - Following algorithmic ID selection +2. โœ… **Complete metadata** - All fields filled with realistic estimates +3. โœ… **Comprehensive state analysis** - Based on thorough code examination +4. โœ… **Measurable desired state** - Specific success criteria, not vague goals +5. โœ… **Granular user stories** - 4-6 hour implementation chunks with dependencies +6. 
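One way to turn the coverage-threshold gate above from a checklist item into an automated failure is to encode it in the test runner configuration. The snippet below is a sketch for a Jest setup (`jest.config.ts`) with assumed numbers; the actual thresholds and test runner are project decisions recorded in the playbook.

```typescript
import type { Config } from "jest";

// Makes `npm test` fail whenever coverage drops below the agreed thresholds,
// so the "coverage threshold" quality gate is enforced automatically.
const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,      // assumed minimum — replace with the sprint's agreed value
      statements: 80,
      functions: 80,
      branches: 70,
    },
  },
};

export default config;
```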
โœ… **Executable acceptance criteria** - Given/When/Then format where appropriate +7. โœ… **Comprehensive DoD items** - Including user testing checkpoints +8. โœ… **Detailed technical guidance** - Working code examples and specific tools +9. โœ… **Proactive risk management** - Risks with mitigation strategies +10. โœ… **Quality gates defined** - Automated and manual verification steps +11. โœ… **Clear responsibility boundaries** - AI vs User tasks explicitly separated +12. โœ… **Communication protocols** - How and when to interact during sprint + +**FINAL VALIDATION**: The completed playbook should enable a different AI agent to execute the sprint successfully with minimal additional clarification. + +**ITERATIVE IMPROVEMENT**: After each sprint, capture lessons learned and refine the process for better efficiency and quality in future sprints. + +--- \ No newline at end of file diff --git a/docs/framework/sprint-implementation-guidelines-improved.md b/docs/framework/sprint-implementation-guidelines-improved.md new file mode 100644 index 0000000..40098d5 --- /dev/null +++ b/docs/framework/sprint-implementation-guidelines-improved.md @@ -0,0 +1,524 @@ +# ๐Ÿ“˜ Sprint Implementation Guidelines (Improved) + +These enhanced guidelines define how the AI agent must implement a Sprint based on the approved **Sprint Playbook**. +They ensure consistent execution, comprehensive testing, proactive risk management, and optimal user collaboration. + +--- + +## 0. Key Definitions (Enhanced) + +**Logical Unit of Work (LUW)**: A single, cohesive code change that: +- Implements one specific functionality or fixes one specific issue +- Can be described in 1-2 sentences with clear before/after behavior +- Passes all relevant tests and quality gates +- Can be committed independently without breaking the system +- Takes 30-90 minutes of focused implementation time + +**Testing Checkpoint**: A mandatory pause where AI provides specific test instructions to user: +- Clear step-by-step test procedures +- Expected behavior descriptions +- Pass/fail criteria +- What to look for (visual, functional, performance) +- How to report issues or failures + +**Blocked Status (`๐Ÿšซ blocked`)**: A user story cannot proceed due to: +- Missing external dependencies that cannot be mocked/simulated +- Conflicting requirements discovered during implementation +- Failed tests that require domain knowledge to resolve +- Technical limitations requiring architecture decisions +- User input needed for UX/business logic decisions + +**Iterative Refinement**: Process of making improvements based on user testing: +- User reports specific issues or improvement requests +- AI analyzes feedback and proposes solutions +- Implementation of fixes follows same LUW/commit pattern +- Re-testing until user confirms satisfaction + +--- + +## 1. 
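To illustrate the checkpoint and blocked-status definitions above in a more structured form, here is a small TypeScript sketch. The interfaces and field names are assumptions made purely for illustration; the guidelines themselves do not prescribe any machine-readable format.

```typescript
// Illustrative shape of a testing checkpoint handed to the user,
// mirroring the elements listed in the definition above.
export interface TestingCheckpoint {
  storyId: string;            // e.g. "US-3"
  steps: string[];            // step-by-step test procedure
  expectedBehaviour: string;  // what the user should observe
  passCriteria: string[];     // conditions that count as a pass
  watchFor: string[];         // visual, functional, or performance aspects
  reportFormat: string;       // how to describe an issue or failure
}

export type StoryStatus = "todo" | "in progress" | "blocked" | "completed";

export interface StoryState {
  storyId: string;
  status: StoryStatus;
  // Present when status is "blocked": which of the blocking causes applies.
  blockedReason?:
    | "missing external dependency"
    | "conflicting requirements"
    | "failing tests need domain knowledge"
    | "architecture decision needed"
    | "user input needed";
}
```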
Enhanced Git & Version Control Rules + +### 1.1 Commit Granularity (Refined) + +**MANDATORY COMMIT PATTERNS:** +* **Feature Implementation**: One LUW per commit +* **Test Addition**: Included in same commit as feature (not separate) +* **Documentation Updates**: Included in same commit as related code changes +* **Bug Fixes**: One fix per commit with clear reproduction steps in message +* **Refactoring**: Separate commits, no functional changes mixed in + +**COMMIT QUALITY CRITERIA:** +* Build passes after each commit +* Tests pass after each commit +* No temporary/debug code included +* All related files updated (tests, docs, types) + +### 1.2 Commit Message Style (Enhanced) + +**STRICT FORMAT:** +``` +(scope): + + + +
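# Hypothetical example following this format (scope, story ID, and wording
# are illustrative only, not taken from a real sprint):
#
#   feat(auth): add JWT validation middleware for protected routes
#
#   Return 401 with a specific error code for missing, invalid, or expired
#   tokens, and log a security event (refs US-3).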