docs: add improved sprint framework based on Sprint 01 lessons learned

Create enhanced versions of sprint framework files incorporating key improvements:

**Sprint Playbook Template (Improved):**
- Add comprehensive status tracking with current focus and complexity estimation
- Enhance quality gates for code, testing, and documentation
- Include proactive risk mitigation strategies with fallback approaches
- Add lessons learned and retrospective sections for continuous improvement
- Define clear communication protocols and success metrics

**How-to-Use Guide (Improved):**
- Implement advanced clarity checking to identify ambiguities before starting
- Add comprehensive project analysis including testing infrastructure assessment
- Enhance story breakdown with Given/When/Then format and dependency tracking
- Include proactive risk management with mitigation strategies
- Define quality gates for automated and manual verification
- Add iterative improvement process for framework refinement

**Implementation Guidelines (Improved):**
- Add structured testing checkpoint protocol with user feedback formats
- Implement iterative refinement process for handling user feedback
- Enhance communication with proactive updates and blocker notifications
- Add advanced error handling with classification and recovery protocols
- Include knowledge transfer and technical decision documentation
- Add continuous quality monitoring with automated checks

These improvements generalize lessons from Sprint 01's successful execution:
- Better user collaboration through structured testing checkpoints
- Enhanced risk management with proactive identification and mitigation
- Comprehensive quality assurance across multiple levels
- Systematic knowledge capture and process optimization
- Clear scope management and change control procedures

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
docs/framework/how-to-use-sprint-playbook-template-improved.md (new file, 363 lines)

# 📘 How to Use the Sprint Playbook Template (Improved)

*(Enhanced Guide for AI Agent)*

---

## 🎯 Purpose

This enhanced guide explains how to generate a **Sprint Playbook** from the **Improved Sprint Playbook Template**.
The Playbook serves as the **single source of truth** for sprint planning, execution, and tracking.

**AI Agent Role:**

1. Analyze user requirements with enhanced clarity checking
2. Perform comprehensive project state assessment
3. Create granular, testable user stories with dependencies
4. Populate the playbook with actionable technical guidance
5. Establish quality gates and risk mitigation strategies
6. Enable iterative refinement based on user feedback

---
## 🛠 Enhanced Step-by-Step Instructions

### 1. Analyze User Input (Enhanced)

**MANDATORY ANALYSIS PROCESS:**

1. **Extract Primary Goal**: What is the main business/technical objective?
2. **Identify Scope Boundaries**: What is explicitly included or excluded?
3. **Parse Technical Constraints**: Frameworks, patterns, performance requirements
4. **Assess Complexity Level**: Simple (1-2 days), Medium (3-5 days), Complex (1+ weeks)
5. **Identify Integration Points**: Other systems, APIs, data sources
6. **Check for Non-Functional Requirements**: Security, performance, accessibility

**Enhanced Clarity Checking:**

If ANY of these conditions are met, STOP and ask for clarification:

- **Vague Success Criteria**: "improve", "enhance", "make better" without metrics
- **Missing Technical Details**: Which specific framework version, API endpoints, data formats
- **Unclear User Experience**: No mention of who uses the feature or how
- **Ambiguous Scope**: Could be interpreted as either a small change or a major feature
- **Conflicting Requirements**: Performance vs. feature richness, security vs. usability
- **Missing Dependencies**: References to undefined components, services, or data

**Structured Clarification Process:**

1. **Categorize Unknowns**: Technical, Business, User Experience, Integration
2. **Ask Specific Questions**: Provide 2-3 concrete options when possible
3. **Request Examples**: "Can you show me a similar feature you like?"
4. **Validate Understanding**: Repeat your interpretation back for confirmation

---
### 2. Assess Current State (Comprehensive)

**MANDATORY DISCOVERY STEPS:**

1. **Read Core Application Files**:
   - Entry points: `main.py`, `index.js`, `app.py`, `src/main.ts`
   - Configuration: `.env`, `config.*`, `settings.py`, `next.config.js`
   - Package management: `package.json`, `requirements.txt`, `Cargo.toml`, `pom.xml`

2. **Analyze Architecture**:
   - Directory structure and organization patterns
   - Module boundaries and dependency flows
   - Data models and database schemas
   - API endpoints and routing patterns

3. **Review Testing Infrastructure**:
   - Test frameworks and utilities available
   - Current test coverage and patterns
   - CI/CD pipeline configuration
   - Quality gates and automation

4. **Assess Documentation**:
   - README completeness and accuracy
   - API documentation status
   - Code comments and inline docs
   - User guides or tutorials

5. **Identify Technical Debt**:
   - Known issues or TODOs
   - Deprecated dependencies
   - Performance bottlenecks
   - Security vulnerabilities

**Output Requirements:**

- **Factual Assessment**: No speculation, only observable facts
- **Relevance Filtering**: Focus on sprint-related components only
- **Version Specificity**: Include exact version numbers for all dependencies
- **Gap Identification**: Clearly note missing pieces needed for sprint success

---
### 3. Sprint ID Selection (Enhanced)

**EXACT ALGORITHM:**

1. **Check Directory Structure**:

   ```bash
   if [ -d docs/sprints ]; then
     ls docs/sprints/sprint-[0-9][0-9]-*.md 2>/dev/null   # scan for existing playbooks
   else
     mkdir -p docs/sprints   # no sprints yet: use "01"
   fi
   ```

2. **Extract and Validate IDs**:
   - Use the regex `sprint-(\d\d)-.*\.md`
   - Validate the two-digit format with zero padding
   - Sort numerically (not lexicographically)

3. **Calculate Next ID**:
   - Find the maximum existing ID
   - Increment by 1
   - Ensure zero-padding (`printf "%02d"` format)

4. **Conflict Resolution**:
   - If gaps exist (e.g., 01, 03, 05), still use max + 1 (06) rather than filling a gap
   - If no pattern is found, default to "01"
   - Never reuse IDs, even if a sprint was cancelled

**Validation Checkpoint**: Double-check the calculated ID against the filesystem before proceeding.
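
Steps 2-4 above can be sketched as a pure function over the playbook filenames (a minimal sketch; `nextSprintId` is an illustrative name, not existing tooling):

```typescript
// Compute the next zero-padded sprint ID from existing playbook filenames.
// Follows max + 1; gaps are never filled and IDs are never reused.
function nextSprintId(filenames: string[]): string {
  const ids = filenames
    .map((name) => /^sprint-(\d\d)-.*\.md$/.exec(name))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => parseInt(m[1], 10)); // numeric, not lexicographic
  const max = ids.length > 0 ? Math.max(...ids) : 0; // no matches -> "01"
  return String(max + 1).padStart(2, "0");
}
```

For example, `nextSprintId(["sprint-01-auth.md", "sprint-03-print.md"])` yields `"04"`, and an empty or non-matching directory yields `"01"`.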

---
### 4. Define Desired State (Enhanced)

**MANDATORY SECTIONS WITH EXAMPLES:**

1. **New Features** (Specific):

   ```
   ❌ Bad: "User authentication system"
   ✅ Good: "JWT-based login with Google OAuth integration, session management, and role-based access control"
   ```

2. **Modified Features** (Before/After):

   ```
   ❌ Bad: "Improve dashboard performance"
   ✅ Good: "Dashboard load time: 3.2s → <1.5s by implementing virtual scrolling and data pagination"
   ```

3. **Expected Behavior Changes** (User-Visible):

   ```
   ❌ Bad: "Better error handling"
   ✅ Good: "Invalid form inputs show inline validation messages instead of generic alerts"
   ```

4. **External Dependencies** (Version-Specific):

   ```
   ❌ Bad: "Add React components"
   ✅ Good: "Add @headlessui/react ^1.7.0 for accessible dropdown components"
   ```

**Quality Checklist:**

- Each item is independently verifiable
- Success criteria are measurable
- Implementation approach is technically feasible
- Dependencies are explicitly listed with versions

---
### 5. Break Down Into User Stories (Enhanced)

**STORY GRANULARITY RULES:**

- **Maximum Implementation Time**: 4-6 hours of focused work
- **Independent Testability**: Each story can be tested in isolation
- **Single Responsibility**: One clear functional outcome per story
- **Atomic Commits**: Typically 1-2 commits per story maximum

**ENHANCED STORY COMPONENTS:**

1. **Story ID**: Sequential numbering with dependency tracking
2. **Title**: Action-oriented, 2-4 words (e.g., "Add Login Button", "Implement JWT Validation")
3. **Description**: Includes context, acceptance criteria preview, technical approach
4. **Acceptance Criteria**: 3-5 specific, testable conditions using Given/When/Then format
5. **Definition of Done**: Includes "user testing completed" for all user-facing changes
6. **Dependencies**: References to other stories that must complete first
7. **Estimated Time**: Realistic hour estimate for planning
8. **Risk Level**: Low/Medium/High based on technical complexity

**ACCEPTANCE CRITERIA FORMAT:**

```
Given [initial state]
When [user action or system trigger]
Then [expected outcome]
And [additional verifications]
```

**Example User Story:**

```
| US-3 | Implement JWT Validation | Create middleware to validate JWT tokens on protected routes | Given a request to protected endpoint, When JWT is missing/invalid/expired, Then return 401 with specific error message, And log security event | Middleware implemented, unit tests added, error handling tested, security logging verified, **user testing completed** | AI-Agent | 🔲 todo | 3 hours | US-1, US-2 |
```
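
Acceptance criteria in this format translate directly into automated checks. A minimal, self-contained sketch for the story above (`validateJwt` and the fake token shape are illustrative stand-ins, not the project's real JWT verification):

```typescript
// Illustrative stand-in for JWT verification: the "token" is plain JSON here.
interface FakeToken { sub: string; exp: number } // exp: seconds since epoch

function parseToken(raw: string): FakeToken | null {
  try { return JSON.parse(raw) as FakeToken; } catch { return null; }
}

// Given a request to a protected endpoint (token taken from its Authorization header)...
function validateJwt(raw: string | undefined, now: number): { status: number; error?: string } {
  if (!raw) return { status: 401, error: "Missing token" };               // When the JWT is missing...
  const payload = parseToken(raw);                                        // real code verifies a signature
  if (payload === null) return { status: 401, error: "Invalid token" };   // ...or invalid...
  if (payload.exp <= now) return { status: 401, error: "Token expired" }; // ...or expired
  return { status: 200 };                                                 // Then: 401 + a specific message
}
```

Each Given/When/Then clause becomes one branch and one test case, which keeps the criteria independently verifiable.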

**DEPENDENCY MANAGEMENT:**

- Use topological sorting for story order
- Identify critical path and potential parallelization
- Plan for dependency failures or delays
- Consider story splitting to reduce dependencies
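
The topological ordering above can be sketched with Kahn's algorithm over each story's dependency list (a minimal sketch; story IDs follow the US-n convention used in the examples):

```typescript
// Order stories so every story appears after all of its dependencies (Kahn's algorithm).
function orderStories(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const story of Object.keys(deps)) {
    indegree.set(story, deps[story].length);
    for (const dep of deps[story]) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), story]);
    }
  }
  const queue = [...indegree.keys()].filter((s) => indegree.get(s) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const story = queue.shift()!;
    order.push(story);
    for (const next of dependents.get(story) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  if (order.length !== Object.keys(deps).length) {
    throw new Error("Circular dependency between stories"); // cycles block the sprint
  }
  return order;
}
```

For example, `orderStories({ "US-1": [], "US-2": ["US-1"], "US-3": ["US-1", "US-2"] })` returns the stories in implementable order, and a cycle raises an error instead of silently producing a bad plan.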

---
### 6. Add Technical Instructions (Comprehensive)

**REQUIRED SECTIONS WITH DEPTH:**

1. **Code Snippets/Patterns** (Executable Examples):

   ```typescript
   // Working example following project conventions
   // (withAuth, AuthRequest, and fetchUserData are project-specific helpers)
   export const authenticatedRoute = withAuth(async (req: AuthRequest, res: Response) => {
     const { userId } = req.user; // set by the JWT middleware
     const data = await fetchUserData(userId);
     return res.json({ success: true, data });
   });
   ```

2. **Architecture Guidelines** (Specific Patterns):
   - **Layer Separation**: "Controllers in `/api/`, business logic in `/services/`, data access in `/repositories/`"
   - **Error Handling**: "Use the Result<T, E> pattern, never throw in async functions"
   - **State Management**: "Use Zustand for client state, React Query for server state"

3. **Coding Style Conventions** (Tool-Specific):
   - **Naming**: "Components: PascalCase, functions: camelCase, constants: SCREAMING_SNAKE_CASE"
   - **Files**: "Components end with `.tsx`, utilities with `.ts`, tests with `.test.ts`"
   - **Imports**: "Absolute imports with the `@/` alias, grouped by type (libraries, internal, relative)"

4. **Testing Strategy** (Framework-Specific):
   - **Unit Tests**: "Jest + Testing Library, minimum 80% coverage for business logic"
   - **Integration**: "Supertest for API endpoints, test database with cleanup"
   - **Manual Testing**: "User validates UI/UX after each user-facing story completion"

5. **Quality Gates** (Automated Checks):
   - **Build**: "`npm run build` must pass without errors"
   - **Lint**: "`npm run lint:check` must pass with zero warnings"
   - **Format**: "`npm run prettier:check` must pass"
   - **Tests**: "`npm test` must pass with coverage thresholds met"

---
### 7. Capture Risks and Dependencies (Proactive)

**ENHANCED RISK ASSESSMENT:**

1. **Technical Risks** (with Mitigation):

   ```
   Risk: "JWT library might not support RS256 algorithm"
   Impact: High (blocks authentication)
   Probability: Low
   Mitigation: "Research jsonwebtoken vs. jose libraries in discovery phase"
   ```

2. **Integration Risks** (with Fallbacks):

   ```
   Risk: "Google OAuth API rate limits during development"
   Impact: Medium (slows testing)
   Probability: Medium
   Fallback: "Mock OAuth responses for local development"
   ```

3. **Timeline Risks** (with Buffers):

   ```
   Risk: "Database migration complexity unknown"
   Impact: High (affects multiple stories)
   Buffer: "Add 20% time buffer, prepare manual migration scripts"
   ```

**DEPENDENCY TRACKING:**

- **Internal Dependencies**: Other modules, shared utilities, database changes
- **External Dependencies**: Third-party APIs, service availability, credentials
- **Human Dependencies**: Design assets, content, domain expertise
- **Infrastructure Dependencies**: Deployment access, environment variables, certificates

---
### 8. Enhanced Quality Gates

**CODE QUALITY GATES:**

- [ ] **Static Analysis**: ESLint, SonarQube, or equivalent passes
- [ ] **Security Scan**: No high/critical vulnerabilities in dependencies
- [ ] **Performance**: No significant regression in key metrics
- [ ] **Accessibility**: WCAG compliance for user-facing changes

**TESTING QUALITY GATES:**

- [ ] **Coverage Threshold**: Minimum X% line coverage maintained
- [ ] **Test Quality**: No skipped tests, meaningful assertions
- [ ] **Edge Cases**: Error conditions and boundary values tested
- [ ] **User Acceptance**: Manual testing completed and approved

**DOCUMENTATION QUALITY GATES:**

- [ ] **API Changes**: OpenAPI spec updated for backend changes
- [ ] **User Impact**: User-facing changes documented in appropriate format
- [ ] **Technical Decisions**: ADRs (Architecture Decision Records) updated for significant changes
- [ ] **Runbook Updates**: Deployment or operational procedures updated

---
### 9. Advanced Definition of Done (DoD)

**AI-RESPONSIBLE ITEMS** (Comprehensive):

* [ ] All user stories meet individual DoD criteria
* [ ] All quality gates passed (code, testing, documentation)
* [ ] Code compiles and passes all automated tests
* [ ] Security review completed (dependency scan, code analysis)
* [ ] Performance benchmarks meet requirements
* [ ] Accessibility standards met (if applicable)
* [ ] Code is committed with conventional commit messages
* [ ] Branch pushed with all commits and proper history
* [ ] Documentation updated and reviewed
* [ ] Sprint status updated to `✅ completed`
* [ ] Technical debt documented for future sprints
* [ ] Lessons learned captured in playbook

**USER-ONLY ITEMS** (Clear Boundaries):

* [ ] User acceptance testing completed successfully
* [ ] Cross-browser/device compatibility verified
* [ ] Production deployment completed and verified
* [ ] External system integrations tested end-to-end
* [ ] Performance validated in production environment
* [ ] Stakeholder sign-off received
* [ ] User training/documentation reviewed (if applicable)
* [ ] Monitoring and alerting configured (if applicable)

---
## ⚠️ Enhanced Guardrails

**SCOPE CREEP PREVENTION:**

- Document any "nice-to-have" features discovered during implementation
- Create follow-up sprint candidates for scope additions
- Maintain strict focus on the original sprint goal
- Get explicit approval for any requirement changes

**TECHNICAL DEBT MANAGEMENT:**

- Document shortcuts taken with rationale
- Estimate effort to address debt properly
- Prioritize debt that affects sprint success
- Plan debt paydown in future sprints

**COMMUNICATION PROTOCOLS:**

- **Daily Updates**: Progress, blockers, timeline risks
- **Weekly Reviews**: Demo working features, gather feedback
- **Blocker Escalation**: Immediate notification with context and options
- **Scope Questions**: Present options with pros/cons, request decision

---
## ✅ Enhanced Output Requirements

**MANDATORY CHECKLIST** - Sprint Playbook must have:

1. ✅ **Validated Sprint ID** - Following algorithmic ID selection
2. ✅ **Complete metadata** - All fields filled with realistic estimates
3. ✅ **Comprehensive state analysis** - Based on thorough code examination
4. ✅ **Measurable desired state** - Specific success criteria, not vague goals
5. ✅ **Granular user stories** - 4-6 hour implementation chunks with dependencies
6. ✅ **Executable acceptance criteria** - Given/When/Then format where appropriate
7. ✅ **Comprehensive DoD items** - Including user testing checkpoints
8. ✅ **Detailed technical guidance** - Working code examples and specific tools
9. ✅ **Proactive risk management** - Risks with mitigation strategies
10. ✅ **Quality gates defined** - Automated and manual verification steps
11. ✅ **Clear responsibility boundaries** - AI vs. user tasks explicitly separated
12. ✅ **Communication protocols** - How and when to interact during the sprint

**FINAL VALIDATION**: The completed playbook should enable a different AI agent to execute the sprint successfully with minimal additional clarification.

**ITERATIVE IMPROVEMENT**: After each sprint, capture lessons learned and refine the process for better efficiency and quality in future sprints.

---
docs/framework/sprint-implementation-guidelines-improved.md (new file, 524 lines)

# 📘 Sprint Implementation Guidelines (Improved)

These enhanced guidelines define how the AI agent must implement a Sprint based on the approved **Sprint Playbook**.
They ensure consistent execution, comprehensive testing, proactive risk management, and optimal user collaboration.

---
## 0. Key Definitions (Enhanced)

**Logical Unit of Work (LUW)**: A single, cohesive code change that:

- Implements one specific functionality or fixes one specific issue
- Can be described in 1-2 sentences with clear before/after behavior
- Passes all relevant tests and quality gates
- Can be committed independently without breaking the system
- Takes 30-90 minutes of focused implementation time

**Testing Checkpoint**: A mandatory pause where the AI provides specific test instructions to the user:

- Clear step-by-step test procedures
- Expected behavior descriptions
- Pass/fail criteria
- What to look for (visual, functional, performance)
- How to report issues or failures

**Blocked Status (`🚫 blocked`)**: A user story cannot proceed due to:

- Missing external dependencies that cannot be mocked/simulated
- Conflicting requirements discovered during implementation
- Failed tests that require domain knowledge to resolve
- Technical limitations requiring architecture decisions
- User input needed for UX/business logic decisions

**Iterative Refinement**: The process of making improvements based on user testing:

- User reports specific issues or improvement requests
- AI analyzes feedback and proposes solutions
- Implementation of fixes follows the same LUW/commit pattern
- Re-testing until the user confirms satisfaction

---
## 1. Enhanced Git & Version Control Rules

### 1.1 Commit Granularity (Refined)

**MANDATORY COMMIT PATTERNS:**

* **Feature Implementation**: One LUW per commit
* **Test Addition**: Included in same commit as feature (not separate)
* **Documentation Updates**: Included in same commit as related code changes
* **Bug Fixes**: One fix per commit with clear reproduction steps in message
* **Refactoring**: Separate commits, no functional changes mixed in

**COMMIT QUALITY CRITERIA:**

* Build passes after each commit
* Tests pass after each commit
* No temporary/debug code included
* All related files updated (tests, docs, types)

### 1.2 Commit Message Style (Enhanced)

**STRICT FORMAT:**

```
<type>(scope): <subject>

<body>

<footer>
```

**ENHANCED TYPES:**

* `feat`: New feature implementation
* `fix`: Bug fix or issue resolution
* `refactor`: Code restructuring without behavior change
* `perf`: Performance improvement
* `test`: Test addition or modification
* `docs`: Documentation updates
* `style`: Code formatting (no logic changes)
* `chore`: Maintenance tasks (dependency updates, build changes)
* `i18n`: Internationalization changes
* `a11y`: Accessibility improvements

**IMPROVED EXAMPLES:**

```
feat(auth): implement JWT token validation middleware

Add Express middleware for JWT token verification:
- Validates token signature using HS256 algorithm
- Extracts user ID and role from payload
- Returns 401 for missing/invalid/expired tokens
- Adds user context to request object for downstream handlers

Refs: US-3
Tests: Added unit tests for middleware edge cases
```
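
A header in the strict format above can be checked mechanically before committing; a minimal sketch (the allowed types mirror the list above; treating the scope as optional and capping the subject at 72 characters are assumptions):

```typescript
// Validate a conventional-commit header: <type>(scope): <subject>
const TYPES = ["feat", "fix", "refactor", "perf", "test", "docs", "style", "chore", "i18n", "a11y"];

function isValidCommitHeader(header: string): boolean {
  // type, optional lowercase scope in parentheses, then ": " and a non-empty subject
  const pattern = new RegExp(`^(${TYPES.join("|")})(\\([a-z0-9-]+\\))?: \\S.*$`);
  return pattern.test(header) && header.length <= 72; // common subject-length limit (assumption)
}
```

A check like this fits naturally into a `commit-msg` git hook, so malformed headers never reach the sprint branch.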
### 1.3 Branch Management (Enhanced)

**BRANCH NAMING CONVENTION:**

```
feature/sprint-<id>-<goal-slug>
```

Examples:

- `feature/sprint-01-barcode-print`
- `feature/sprint-02-user-auth`
- `feature/sprint-03-api-optimization`

**BRANCH LIFECYCLE:**

1. **Creation**: Branch from the latest `main` with a clean history
2. **Development**: Regular commits following the LUW pattern
3. **Integration**: Rebase against `main` before completion
4. **Handover**: All tests passing, documentation complete
5. **Merge**: User responsibility; the AI never merges

### 1.4 Advanced Commit Strategies

**RELATED CHANGE GROUPING:**

```
# Single commit for cohesive changes:
feat(print): add barcode table with print optimization

- Implement 3-column table layout for barcode display
- Add CSS print media queries for A4 paper
- Include responsive design for screen viewing
- Add comprehensive i18n support (EN/HR)

Refs: US-3, US-4
```

**DEPENDENCY TRACKING IN COMMITS:**

```
feat(auth): implement user session management

Builds upon JWT middleware from previous commit.
Requires database migration for session storage.

Refs: US-5
Depends-On: US-3 (JWT middleware)
```

---
## 2. Enhanced Testing & Quality Assurance

### 2.1 Multi-Level Testing Strategy

**AUTOMATED TESTING (AI Responsibility):**

* **Unit Tests**: Individual functions, components, utilities
* **Integration Tests**: API endpoints, database operations
* **Build Tests**: Compilation, bundling, linting
* **Security Tests**: Dependency vulnerability scanning

**MANUAL TESTING (User Responsibility with AI Guidance):**

* **Functional Testing**: Feature behavior verification
* **UI/UX Testing**: Visual appearance and user interaction
* **Cross-browser Testing**: Compatibility verification
* **Performance Testing**: Load times, responsiveness
* **Accessibility Testing**: Screen reader, keyboard navigation

### 2.2 Testing Checkpoint Protocol

**WHEN TO TRIGGER TESTING CHECKPOINTS:**

1. After each user-facing user story completion
2. After significant technical changes that affect the user experience
3. Before major integrations or API changes
4. When the user reports issues requiring iteration

**TESTING INSTRUCTION FORMAT:**

```markdown
## 🧪 USER TESTING CHECKPOINT - [Story ID]

**[Story Title]** is implemented and ready for testing.

### **Test Steps:**
1. [Specific action]: Navigate to X, click Y, enter Z
2. [Verification step]: Check that A appears, B behaves correctly
3. [Edge case]: Try invalid input W, verify error message

### **Expected Behavior:**
- ✅ [Specific outcome]: X should display Y with Z properties
- ✅ [Performance expectation]: Page should load in under 2 seconds
- ✅ [Error handling]: Invalid inputs show appropriate messages

### **What to Test:**
1. **[Category]**: [Specific items to verify]
2. **[Category]**: [Specific items to verify]

### **Please report:**
- ✅ **PASS** if all expected behaviors work correctly
- ❌ **FAIL** with specific details about what's wrong

**Expected Result**: [Summary of successful test outcome]
```

### 2.3 Issue Resolution Workflow

**USER ISSUE REPORTING FORMAT:**

```
❌ **FAIL**: [Brief description]

**Issue Details:**
- [Specific problem observed]
- [Steps that led to the issue]
- [Expected vs. actual behavior]
- [Browser/device info if relevant]
- [Error messages if any]

**Priority**: [High/Medium/Low]
```

**AI RESPONSE PROTOCOL:**

1. **Acknowledge**: Confirm understanding of the reported issue
2. **Analyze**: Identify the root cause and scope of the fix needed
3. **Propose**: Suggest a solution approach for user approval
4. **Implement**: Make fixes following the LUW pattern
5. **Re-test**: Request focused testing on the fixed area

---
## 3. Enhanced Status Management & Tracking

### 3.1 Real-Time Status Updates

**STORY STATUS TRANSITIONS:**

```
🔲 todo → 🚧 in progress → ✅ done
                ↓
           🚫 blocked (requires user intervention)
```

**SPRINT STATUS GRANULARITY:**

```
🔲 not started → 🚧 in progress → 🛠️ implementing US-X → ✅ completed
                        ↓
                 🚫 blocked on US-X
```

**STATUS UPDATE TIMING:**

* **Story Start**: Update status in the first commit containing story work
* **Story Completion**: Update status in the final commit for the story
* **Blocker Discovery**: Update immediately when a blocker is identified
* **Sprint Completion**: Update after all stories are completed and tested
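
The story transitions above can be enforced mechanically when updating the playbook; a minimal sketch (the `🚫 blocked → 🚧 in progress` resume edge is an assumption, since the diagram only shows entry into the blocked state):

```typescript
// Allowed story status transitions from the diagram above.
const TRANSITIONS: Record<string, string[]> = {
  "🔲 todo": ["🚧 in progress"],
  "🚧 in progress": ["✅ done", "🚫 blocked"],
  "🚫 blocked": ["🚧 in progress"], // resume once the blocker is resolved (assumption)
  "✅ done": [],                    // terminal state
};

function canTransition(from: string, to: string): boolean {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```

Rejecting illegal moves (for example `🔲 todo → ✅ done`) keeps the playbook history honest: a story cannot be marked done without ever having been in progress.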

### 3.2 Progress Tracking Enhancement

**COMMIT-LEVEL TRACKING:**

```
feat(auth): implement login form validation

Progress: US-2 implementation started
Completed: Form layout, basic validation rules
Remaining: Error handling, success flow, tests

Refs: US-2 (40% complete)
```

**STORY DEPENDENCY TRACKING:**

* Update the playbook to show which dependencies are complete
* Identify critical path blockers early
* Communicate timeline impacts of dependency delays

### 3.3 Sprint Completion Criteria (Enhanced)

**COMPREHENSIVE COMPLETION CHECKLIST:**

**Technical Completion:**

* [ ] All code committed to sprint branch with clean history
* [ ] Build passes without errors or warnings
* [ ] All automated tests pass with required coverage
* [ ] Security scan shows no critical vulnerabilities
* [ ] Performance benchmarks meet requirements
* [ ] Documentation updated and accurate

**Quality Assurance Completion:**

* [ ] All user stories tested and approved by user
* [ ] No critical bugs or usability issues remain
* [ ] Cross-browser compatibility verified (if applicable)
* [ ] Mobile responsiveness tested (if applicable)
* [ ] Accessibility requirements met (if applicable)

**Process Completion:**

* [ ] Sprint playbook fully updated with final status
* [ ] All DoD items completed (AI-responsible only)
* [ ] Lessons learned documented
* [ ] Technical debt identified and documented
* [ ] Follow-up tasks identified for future sprints

---
## 4. Advanced Communication & Collaboration

### 4.1 Proactive Communication

**REGULAR UPDATE CADENCE:**

* **After each story completion**: Brief summary of what was delivered
* **Daily progress**: Current focus, any blockers or risks identified
* **Weekly demo**: Show working features, gather feedback
* **Issue discovery**: Immediate notification with context and options

**COMMUNICATION TEMPLATES:**

**Story Completion Update:**

```
✅ **US-X Completed**: [Story Title]

**Delivered**: [Brief description of functionality]
**Testing Status**: Ready for user testing
**Next**: Proceeding to US-Y or awaiting test results

**Demo**: [Link or instructions to see the feature]
```

**Blocker Notification:**

```
🚫 **Blocker Identified**: US-X blocked

**Issue**: [Specific technical problem]
**Impact**: [Effect on timeline/other stories]
**Options**:
1. [Approach A with pros/cons]
2. [Approach B with pros/cons]
3. [Wait for user guidance]

**Recommendation**: [AI's preferred approach with rationale]
```

### 4.2 Decision-Making Support

**TECHNICAL DECISION FRAMEWORK:**

When multiple implementation approaches exist:

1. **Present Options**: 2-3 concrete alternatives
2. **Analyze Trade-offs**: Performance, complexity, maintainability
3. **Recommend an Approach**: Based on sprint goals and project context
4. **Document the Decision**: Rationale for future reference

**EXAMPLE DECISION DOCUMENTATION:**

```
**Decision**: Use Zustand vs. Redux for state management
**Context**: US-4 requires client-side state for user preferences
**Options Considered**:
1. Redux Toolkit (robust, well-known, higher complexity)
2. Zustand (simpler, smaller bundle, smaller ecosystem)
3. React Context (native, limited functionality)
**Decision**: Zustand
**Rationale**: Simpler API matches the sprint timeline, sufficient features, smaller bundle size aligns with performance goals
**Risks**: Smaller ecosystem, team unfamiliarity
```
### 4.3 Knowledge Transfer & Documentation
|
||||
|
||||
**TECHNICAL DOCUMENTATION REQUIREMENTS:**
|
||||
* **API Changes**: Update OpenAPI specs, endpoint documentation
|
||||
* **Database Changes**: Document schema modifications, migration scripts
|
||||
* **Configuration Changes**: Update environment variables, deployment notes
|
||||
* **Architecture Decisions**: Record significant design choices and rationale
|
||||
|
||||
**USER-FACING DOCUMENTATION:**
|
||||
* **Feature Guides**: How to use new functionality
|
||||
* **Troubleshooting**: Common issues and solutions
|
||||
* **Migration Guides**: Changes that affect existing workflows
|
||||
|
||||
---
|
||||
|
||||
## 5. Risk Management & Continuous Improvement
|
||||
|
||||
### 5.1 Proactive Risk Identification
|
||||
|
||||
**TECHNICAL RISK MONITORING:**
|
||||
* **Dependency Issues**: New vulnerabilities, breaking changes
|
||||
* **Performance Degradation**: Build times, runtime performance
|
||||
* **Integration Problems**: API changes, service availability
|
||||
* **Browser Compatibility**: New features requiring polyfills
|
||||
|
||||
**MITIGATION STRATEGIES:**
|
||||
* **Fallback Implementations**: Alternative approaches for high-risk features
|
||||
* **Progressive Enhancement**: Core functionality first, enhancements after
|
||||
* **Feature Flags**: Ability to disable problematic features quickly
|
||||
* **Rollback Plans**: Clear steps to revert changes if needed
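
A minimal sketch of the feature-flag idea above (the flag names and values are illustrative assumptions, not part of this framework):

```typescript
// Hypothetical feature-flag registry: flag names are examples only.
type FlagName = "newCheckoutFlow" | "experimentalSearch";

const flags: Record<FlagName, boolean> = {
  newCheckoutFlow: true,
  experimentalSearch: false,
};

// Callers branch on the flag, so a problematic feature can be turned
// off by flipping one value instead of reverting code.
function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

function disable(flag: FlagName): void {
  flags[flag] = false;
}
```

Because every call site checks `isEnabled`, disabling a misbehaving feature is a one-line change, which is usually faster and safer than a rollback deploy.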

### 5.2 Continuous Quality Monitoring

**AUTOMATED QUALITY CHECKS:**
* **Code Quality**: Complexity metrics, duplication detection
* **Security**: Regular dependency scanning, code analysis
* **Performance**: Bundle size tracking, runtime benchmarks
* **Accessibility**: Automated a11y testing where possible
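
The bundle-size tracking mentioned above can be automated with a small budget check that CI runs on every build; the asset names and limits here are hypothetical:

```typescript
// Illustrative per-asset size budgets in kilobytes (assumed values).
const budgetsKb: Record<string, number> = {
  "main.js": 250,
  "vendor.js": 500,
};

// Returns the assets that exceed their budget so CI can fail fast
// and surface a regression before it reaches review.
function overBudget(actualKb: Record<string, number>): string[] {
  return Object.keys(budgetsKb).filter(
    (asset) => (actualKb[asset] ?? 0) > budgetsKb[asset],
  );
}
```

A non-empty return value would fail the pipeline, turning bundle growth into a visible quality trend instead of a silent one.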

**QUALITY TREND TRACKING:**
* **Test Coverage**: Maintain or improve coverage percentages
* **Build Performance**: Monitor CI/CD pipeline execution times
* **Code Maintainability**: Track complexity and technical debt metrics

### 5.3 Sprint Retrospective & Learning

**LESSONS LEARNED CAPTURE:**
* **What Worked Well**: Effective practices, good decisions, helpful tools
* **What Could Improve**: Bottlenecks, inefficiencies, communication gaps
* **Unexpected Discoveries**: New insights about technology, domain, or process
* **Technical Debt Created**: Shortcuts taken that need future attention

**ACTION ITEMS FOR FUTURE SPRINTS:**
* **Process Improvements**: Changes to workflow or guidelines
* **Tool Enhancements**: Better automation, monitoring, or development tools
* **Knowledge Gaps**: Areas needing research or training
* **Architecture Evolution**: System improvements based on learned constraints

**KNOWLEDGE SHARING:**
* **Technical Patterns**: Successful implementations to reuse
* **Common Pitfalls**: Issues to avoid in similar work
* **Best Practices**: Refined approaches for specific scenarios
* **Tool Recommendations**: Effective libraries, utilities, or techniques

---

## 6. Enhanced Error Handling & Recovery

### 6.1 Comprehensive Error Classification

**TECHNICAL ERRORS:**
* **Build/Compilation Failures**: Syntax errors, missing dependencies
* **Test Failures**: Unit test failures, integration problems
* **Runtime Errors**: Application crashes, unexpected behaviors
* **Configuration Issues**: Environment variables, deployment problems

**PROCESS ERRORS:**
* **Requirement Ambiguity**: Unclear or conflicting specifications
* **Dependency Delays**: External services, team dependencies
* **Scope Creep**: Requirements expanding beyond the original plan
* **Resource Constraints**: Time, complexity, or capability limits
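
One way this classification could be encoded so an agent can decide mechanically when to escalate; the type shape is an assumption for illustration, not prescribed by the framework:

```typescript
// Discriminated union mirroring the two error categories above.
type SprintError =
  | { kind: "technical"; subtype: "build" | "test" | "runtime" | "config" }
  | { kind: "process"; subtype: "ambiguity" | "dependency" | "scope" | "resource" };

// Technical errors are often resolvable by the AI agent itself;
// process errors (unclear requirements, scope creep, external
// delays) generally need a user decision before work continues.
function needsUserInput(err: SprintError): boolean {
  return err.kind === "process";
}
```

The union makes the classification exhaustive: adding a new category forces every handler to account for it at compile time.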

### 6.2 Enhanced Recovery Protocols

**ERROR RESPONSE FRAMEWORK:**
1. **Stop & Assess**: Immediately halt progress and assess impact
2. **Document**: Record the exact error, context, and attempted solutions
3. **Classify**: Determine whether the AI can resolve it or user input is required
4. **Communicate**: Notify the user with specific details and options
5. **Wait or Act**: Follow user guidance or implement the approved solution
6. **Verify**: Confirm resolution and prevent recurrence

**USER COMMUNICATION TEMPLATES:**

**Build Failure:**

```
🚫 **Build Failure**: US-X implementation blocked

**Error**: [Exact error message]
**Context**: [What was being attempted]
**Analysis**: [Root cause analysis]
**Impact**: [Effect on current story and sprint timeline]

**Proposed Solution**: [Specific fix approach]
**Alternative**: [Backup approach if available]

**Request**: Please confirm preferred resolution approach
```

**Test Failure:**

```
🚫 **Test Failure**: Automated tests failing for US-X

**Failed Tests**: [List of specific test cases]
**Error Details**: [Test failure messages]
**Analysis**: [Why tests are failing]
**Options**:
1. Fix implementation to pass existing tests
2. Update tests to match new requirements
3. Investigate whether test assumptions are incorrect

**Recommendation**: [AI's preferred approach]
```

---

## 7. Sprint Completion & Handover

### 7.1 Comprehensive Sprint Wrap-up

**FINAL DELIVERABLES CHECKLIST:**
* [ ] **Code Repository**: All commits pushed to the sprint branch
* [ ] **Documentation**: All technical and user docs updated
* [ ] **Test Results**: All automated tests passing, user testing complete
* [ ] **Sprint Playbook**: Status updated, lessons learned captured
* [ ] **Technical Debt**: Identified and documented for future sprints
* [ ] **Knowledge Transfer**: Key technical decisions documented

**QUALITY VERIFICATION:**
* [ ] **Security Review**: No critical vulnerabilities introduced
* [ ] **Performance**: No regressions in key metrics
* [ ] **Accessibility**: Standards maintained or improved
* [ ] **Browser Support**: Compatibility verified for target browsers
* [ ] **Mobile Support**: Responsive design tested (if applicable)

### 7.2 User Handover Process

**HANDOVER DOCUMENTATION:**
1. **Feature Summary**: What was built and why
2. **Usage Instructions**: How to use the new functionality
3. **Testing Summary**: What was tested and the results
4. **Known Limitations**: Any scope boundaries or technical constraints
5. **Future Recommendations**: Suggested enhancements or improvements

**USER RESPONSIBILITIES POST-HANDOVER:**
* **Code Review**: Review changes before merging to main
* **Integration Testing**: Test with the full system in a staging environment
* **Production Deployment**: Deploy to the production environment
* **User Acceptance**: Final approval of delivered functionality
* **Performance Monitoring**: Monitor system behavior post-deployment

### 7.3 Success Metrics & Evaluation

**SPRINT SUCCESS CRITERIA:**
* [ ] **Goal Achievement**: Primary sprint objective met
* [ ] **Story Completion**: All must-have stories completed
* [ ] **Quality Standards**: No critical bugs, performance targets met
* [ ] **Timeline**: Completed within the estimated timeframe
* [ ] **User Satisfaction**: Delivered functionality meets expectations

**CONTINUOUS IMPROVEMENT METRICS:**
* **Estimation Accuracy**: How close time estimates were to actuals
* **Issue Discovery Rate**: Number of bugs found post-completion
* **User Feedback Quality**: Satisfaction with delivered features
* **Technical Debt Impact**: Amount of debt created vs. value delivered
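
Estimation accuracy can be measured in several ways; one simple illustrative metric (not mandated by this framework) scores an exact estimate at 100% and degrades symmetrically for over- and under-runs:

```typescript
// Hypothetical accuracy metric: 100 means the estimate matched the
// actual hours exactly; larger misses in either direction score lower,
// bottoming out at 0.
function estimationAccuracy(estimatedHours: number, actualHours: number): number {
  if (actualHours <= 0) return 0; // no actuals recorded yet
  const miss = Math.abs(actualHours - estimatedHours) / actualHours;
  return Math.max(0, Math.round((1 - miss) * 100));
}
```

Tracking this per story across sprints makes the "Estimation Accuracy" trend concrete instead of anecdotal.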

---

This improved framework enables more effective sprint execution through better planning, enhanced communication, proactive risk management, and comprehensive quality assurance, while maintaining the successful patterns from Sprint 01.

---

**docs/framework/sprint-playbook-template-improved.md** (new file, 232 lines)

---

# 📑 Sprint Playbook Template (Improved)

## 0. Sprint Status

```
Status: [🔲 not started | 🚧 in progress | 🛠️ implementing <user story id> | ✅ completed | 🚫 blocked]
```

**Current Focus**: [Brief description of what's actively being worked on]

---

## 1. Sprint Metadata

* **Sprint ID:** [unique identifier]
* **Start Date:** [YYYY-MM-DD]
* **End Date:** [YYYY-MM-DD]
* **Sprint Goal:** [clear and concise goal statement]
* **Team/Agent Responsible:** [AI agent name/version]
* **Branch Name (Git):** [feature/sprint-<id>-<short-description>]
* **Estimated Complexity:** [Simple | Medium | Complex]
* **Dependencies:** [List any blocking dependencies identified up front]

---

## 2. Current State of Software

*(Comprehensive snapshot of the project before Sprint work begins)*

* **Main Features Available:** [list with brief descriptions]
* **Known Limitations / Issues:** [list with impact assessment]
* **Relevant Files / Modules:** [list with paths and purposes]
* **Environment / Dependencies:** [runtime versions, frameworks, libs with versions]
* **Testing Infrastructure:** [available test frameworks, coverage tools, CI/CD status]
* **Documentation Status:** [current state of docs, known gaps]

---

## 3. Desired State of Software

*(Target state after the Sprint is complete)*

* **New Features:** [list with specific functionality descriptions]
* **Modified Features:** [list with before/after behavior changes]
* **Expected Behavior Changes:** [user-visible and system-level changes]
* **External Dependencies / Integrations:** [new libraries, APIs, services with versions]
* **Performance Expectations:** [any performance requirements or improvements]
* **Security Considerations:** [security implications or requirements]

---

## 4. User Stories

Each story represents a **unit of work** that can be developed, tested, and verified independently.

| Story ID | Title | Description | Acceptance Criteria | Definition of Done | Assignee | Status | Est. Time | Dependencies |
| -------- | ----- | ----------- | ------------------- | ------------------ | -------- | ------ | --------- | ------------ |
| US-1 | [short title] | [detailed description of functionality] | [specific, testable conditions] | [implemented, tested, docs updated, lint clean, **user testing completed**] | [AI agent] | 🔲 todo | [hours] | [story IDs] |
| US-2 | ... | ... | ... | ... | ... | 🔲 todo | [hours] | [story IDs] |

**Status options:** `🔲 todo`, `🚧 in progress`, `🚫 blocked`, `✅ done`

**Story Priority Matrix:**
- **Must Have**: Core functionality required for sprint success
- **Should Have**: Important features that add significant value
- **Could Have**: Nice-to-have features if time permits
- **Won't Have**: Explicitly out of scope for this sprint
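
The status and priority values above can also be encoded as types so playbook data is checkable by tooling; the field names here are an illustrative assumption:

```typescript
// Story status and priority, mirroring the table and matrix above.
type StoryStatus = "todo" | "in progress" | "blocked" | "done";
type Priority = "must" | "should" | "could" | "wont";

interface Story {
  id: string;
  priority: Priority;
  status: StoryStatus;
  dependencies: string[]; // IDs of stories this one depends on
}

// Sprint completion is gated on the must-have stories: all of them
// have to reach "done" before the sprint can be marked complete.
function mustHavesDone(stories: Story[]): boolean {
  return stories
    .filter((s) => s.priority === "must")
    .every((s) => s.status === "done");
}
```

A check like this could back the "All must-have stories completed" item in the Sprint Definition of Done.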

---

## 5. Technical Instructions

*(Comprehensive guidance to help the AI converge quickly on the correct solution)*

* **Code Snippets / Patterns:**

  ```typescript
  // Example pattern using actual syntax from the project
  export const exampleFunction = async (params: ExampleType): Promise<ResultType> => {
    // Implementation pattern following project conventions
  };
  ```

* **Architecture Guidelines:**
  - [layering principles, module boundaries, design patterns]
  - [data flow patterns, state management approaches]
  - [error handling conventions, logging patterns]

* **Coding Style Conventions:**
  - [naming rules: camelCase, PascalCase, kebab-case usage]
  - [formatting: prettier, eslint rules]
  - [file organization, import/export patterns]

* **Testing Strategy:**
  - [unit/integration/e2e testing approach]
  - [testing framework and utilities to use]
  - [coverage targets and quality gates]
  - [manual testing checkpoints and user validation requirements]

* **Internationalization (i18n):**
  - [translation key patterns and placement]
  - [supported locales and fallback strategies]
  - [client vs. server-side translation approaches]

* **Performance Considerations:**
  - [bundle size targets, lazy loading strategies]
  - [database query optimization patterns]
  - [caching strategies and invalidation]

---

## 6. Risks and Dependencies

* **Technical Risks:**
  - [API compatibility issues, framework limitations]
  - [Performance bottlenecks, scalability concerns]
  - [Browser compatibility, device-specific issues]

* **Integration Risks:**
  - [Third-party service dependencies]
  - [Database migration or schema change needs]
  - [Authentication/authorization complexity]

* **Timeline Risks:**
  - [Unknown complexity areas]
  - [Potential scope creep triggers]
  - [External dependency availability]

* **Dependencies:**
  - [other modules, external services, libraries]
  - [team inputs, design assets, API documentation]
  - [infrastructure or deployment requirements]

* **Mitigation Strategies:**
  - [fallback approaches for high-risk items]
  - [spike work to reduce uncertainty]
  - [simplified alternatives if the main approach fails]

---

## 7. Quality Gates

* **Code Quality:**
  - [ ] All code follows project style guidelines
  - [ ] No linting errors or warnings
  - [ ] Code compiles without errors
  - [ ] No security vulnerabilities introduced

* **Testing Quality:**
  - [ ] Unit tests cover new functionality
  - [ ] Integration points are tested
  - [ ] Manual testing completed by the user
  - [ ] Regression testing passed

* **Documentation Quality:**
  - [ ] Code comments added/updated
  - [ ] README or API docs updated
  - [ ] User-facing documentation updated
  - [ ] Technical decisions documented

---

## 8. Sprint Definition of Done (DoD)

The Sprint is complete when:

**AI-Responsible Items** (AI agent can verify and tick):
* [ ] All user stories meet their individual Definition of Done
* [ ] All quality gates passed
* [ ] Code compiles and passes automated tests
* [ ] Code formatting validated (`npm run prettier:check`)
* [ ] Code is committed and pushed on branch `feature/sprint-<id>`
* [ ] Documentation is updated
* [ ] Sprint status updated to `✅ completed`
* [ ] No critical bugs or blockers remain
* [ ] Performance meets specified requirements
* [ ] Security review completed (if applicable)

**User-Only Items** (only the user can verify and tick):
* [ ] Branch is merged into main
* [ ] User acceptance testing completed
* [ ] Production deployment completed (if applicable)
* [ ] External system integrations verified (if applicable)
* [ ] Stakeholder sign-off received
* [ ] Performance validated in the production environment

**Success Metrics:**
* [ ] Sprint goal achieved
* [ ] All must-have stories completed
* [ ] No regression bugs introduced
* [ ] User satisfaction with delivered functionality

---

## 9. Lessons Learned & Retrospective

*(To be filled in during/after sprint execution)*

**What Went Well:**
- [successes, good decisions, effective processes]

**What Could Be Improved:**
- [challenges faced, inefficiencies, areas for optimization]

**Action Items for Future Sprints:**
- [specific improvements to implement next time]

**Technical Debt Created:**
- [shortcuts taken that need future attention]

**Knowledge Gained:**
- [new learnings about technology, domain, or processes]

---

## 10. Communication & Coordination

**Stakeholder Updates:**
- [frequency and format of progress updates]
- [key decision points requiring user input]

**Testing Coordination:**
- [when to request user testing]
- [what specific scenarios to test]
- [how to report and track issues]

**Blocker Escalation:**
- [how to handle technical blockers]
- [when to pause vs. continue with alternative approaches]
- [communication protocol for critical issues]