# 📘 Sprint Implementation Guidelines (Improved)

These enhanced guidelines define how the AI agent must implement a Sprint based on the approved **Sprint Playbook**.
They ensure consistent execution, comprehensive testing, proactive risk management, and optimal user collaboration.

---

## 0. Key Definitions (Enhanced)

**Logical Unit of Work (LUW)**: A single, cohesive code change that:
- Implements one specific functionality or fixes one specific issue
- Can be described in 1-2 sentences with clear before/after behavior
- Passes all relevant tests and quality gates
- Can be committed independently without breaking the system
- Takes 30-90 minutes of focused implementation time

**Testing Checkpoint**: A mandatory pause where the AI provides specific test instructions to the user:
- Clear step-by-step test procedures
- Expected behavior descriptions
- Pass/fail criteria
- What to look for (visual, functional, performance)
- How to report issues or failures

**Blocked Status (`🚫 blocked`)**: A user story cannot proceed due to:
- Missing external dependencies that cannot be mocked/simulated
- Conflicting requirements discovered during implementation
- Failed tests that require domain knowledge to resolve
- Technical limitations requiring architecture decisions
- User input needed for UX/business logic decisions

**Iterative Refinement**: The process of making improvements based on user testing:
- User reports specific issues or improvement requests
- AI analyzes feedback and proposes solutions
- Fixes are implemented following the same LUW/commit pattern
- Re-testing continues until the user confirms satisfaction

---

## 1. Enhanced Git & Version Control Rules

### 1.1 Commit Granularity (Refined)

**MANDATORY COMMIT PATTERNS:**

* **Feature Implementation**: One LUW per commit
* **Test Addition**: Included in the same commit as the feature (not separate)
* **Documentation Updates**: Included in the same commit as the related code changes
* **Bug Fixes**: One fix per commit, with clear reproduction steps in the message
* **Refactoring**: Separate commits, with no functional changes mixed in

**COMMIT QUALITY CRITERIA:**

* Build passes after each commit
* Tests pass after each commit
* No temporary/debug code included
* All related files updated (tests, docs, types)

### 1.2 Commit Message Style (Enhanced)

**STRICT FORMAT:**

```
<type>(scope): <subject>

<body>

<footer>
```

**ENHANCED TYPES:**

* `feat`: New feature implementation
* `fix`: Bug fix or issue resolution
* `refactor`: Code restructuring without behavior change
* `perf`: Performance improvement
* `test`: Test addition or modification
* `docs`: Documentation updates
* `style`: Code formatting (no logic changes)
* `chore`: Maintenance tasks (dependency updates, build changes)
* `i18n`: Internationalization changes
* `a11y`: Accessibility improvements
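If the project uses commitlint (an assumption; this guide does not mandate any specific tooling), the enhanced type list can be enforced automatically rather than by convention. A minimal `commitlint.config.ts` sketch:

```typescript
// Hedged sketch: enforce the enhanced commit types via commitlint.
// Assumes @commitlint/cli and @commitlint/config-conventional are installed;
// adjust or drop this if the project uses different tooling.
import type { UserConfig } from "@commitlint/types";

const config: UserConfig = {
  extends: ["@commitlint/config-conventional"],
  rules: {
    // Reject any commit whose <type> is not in the enhanced list above.
    "type-enum": [
      2,
      "always",
      ["feat", "fix", "refactor", "perf", "test", "docs", "style", "chore", "i18n", "a11y"],
    ],
  },
};

export default config;
```

Wired into a `commit-msg` hook (for example via husky), this rejects any commit whose type falls outside the list.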
**IMPROVED EXAMPLES:**

```
feat(auth): implement JWT token validation middleware

Add Express middleware for JWT token verification:
- Validates token signature using HS256 algorithm
- Extracts user ID and role from payload
- Returns 401 for missing/invalid/expired tokens
- Adds user context to request object for downstream handlers

Refs: US-3
Tests: Added unit tests for middleware edge cases
```

### 1.3 Branch Management (Enhanced)

**BRANCH NAMING CONVENTION:**

```
feature/sprint-<id>-<goal-slug>
```

Examples:
- `feature/sprint-01-barcode-print`
- `feature/sprint-02-user-auth`
- `feature/sprint-03-api-optimization`

**BRANCH LIFECYCLE:**

1. **Creation**: Branch from latest `main` with clean history
2. **Development**: Regular commits following LUW pattern
3. **Integration**: Rebase against `main` before completion
4. **Handover**: All tests passing, documentation complete
5. **Merge**: User responsibility, AI never merges
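As a rough command-level sketch of that lifecycle (example branch name and commands only; adapt to the project's actual workflow):

```
# 1. Creation: branch from the latest main
git checkout main && git pull
git checkout -b feature/sprint-02-user-auth

# 2. Development: LUW-sized commits on the branch
# 3. Integration: rebase against main before completion
git fetch origin
git rebase origin/main
git push --force-with-lease origin feature/sprint-02-user-auth

# 4./5. Handover and merge: the user reviews and merges; the AI never merges into main
```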
### 1.4 Advanced Commit Strategies

**RELATED CHANGE GROUPING:**

```
# Single commit for cohesive changes:
feat(print): add barcode table with print optimization

- Implement 3-column table layout for barcode display
- Add CSS print media queries for A4 paper
- Include responsive design for screen viewing
- Add comprehensive i18n support (EN/HR)

Refs: US-3, US-4
```

**DEPENDENCY TRACKING IN COMMITS:**

```
feat(auth): implement user session management

Builds upon JWT middleware from previous commit.
Requires database migration for session storage.

Refs: US-5
Depends-On: US-3 (JWT middleware)
```

---

## 2. Enhanced Testing & Quality Assurance

### 2.1 Multi-Level Testing Strategy

**AUTOMATED TESTING (AI Responsibility):**

* **Unit Tests**: Individual functions, components, utilities (see the sketch after this list)
* **Integration Tests**: API endpoints, database operations
* **Build Tests**: Compilation, bundling, linting
* **Security Tests**: Dependency vulnerability scanning
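As an illustration of the unit-test level, a test for the JWT validation middleware from the commit example in section 1.2 might look roughly like this. This is a minimal sketch assuming Vitest and the `jsonwebtoken` package; `requireAuth`, its factory-style signature, and the import path are hypothetical placeholders rather than the project's actual code:

```typescript
// Hypothetical unit tests for a requireAuth(secret) middleware factory that
// validates HS256 tokens, attaches { id, role } to req.user, and returns 401
// for missing/invalid/expired tokens (per the commit example in section 1.2).
import { describe, expect, it, vi } from "vitest";
import jwt from "jsonwebtoken";
import { requireAuth } from "../src/middleware/requireAuth"; // placeholder path

const SECRET = "test-secret";

const mockRes = () =>
  ({ status: vi.fn().mockReturnThis(), json: vi.fn().mockReturnThis() } as any);

describe("requireAuth middleware", () => {
  it("returns 401 when no token is provided", () => {
    const req = { headers: {} } as any;
    const res = mockRes();
    const next = vi.fn();

    requireAuth(SECRET)(req, res, next);

    expect(res.status).toHaveBeenCalledWith(401);
    expect(next).not.toHaveBeenCalled();
  });

  it("attaches the decoded user and calls next() for a valid token", () => {
    const token = jwt.sign({ sub: "user-1", role: "admin" }, SECRET);
    const req = { headers: { authorization: `Bearer ${token}` } } as any;
    const res = mockRes();
    const next = vi.fn();

    requireAuth(SECRET)(req, res, next);

    expect(req.user).toMatchObject({ id: "user-1", role: "admin" });
    expect(next).toHaveBeenCalled();
  });
});
```

Per section 1.1, tests like these belong in the same commit as the feature they cover.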
**MANUAL TESTING (User Responsibility with AI Guidance):**

* **Functional Testing**: Feature behavior verification
* **UI/UX Testing**: Visual appearance and user interaction
* **Cross-browser Testing**: Compatibility verification
* **Performance Testing**: Load times, responsiveness
* **Accessibility Testing**: Screen reader, keyboard navigation

### 2.2 Testing Checkpoint Protocol

**WHEN TO TRIGGER TESTING CHECKPOINTS:**

1. After each user-facing user story completion
2. After significant technical changes that affect user experience
3. Before major integrations or API changes
4. When user reports issues requiring iteration

**TESTING INSTRUCTION FORMAT:**

```markdown
## 🧪 USER TESTING CHECKPOINT - [Story ID]

**[Story Title]** is implemented and ready for testing.

### **Test Steps:**
1. [Specific action]: Navigate to X, click Y, enter Z
2. [Verification step]: Check that A appears, B behaves correctly
3. [Edge case]: Try invalid input W, verify error message

### **Expected Behavior:**
- ✅ [Specific outcome]: X should display Y with Z properties
- ✅ [Performance expectation]: Page should load in under 2 seconds
- ✅ [Error handling]: Invalid inputs show appropriate messages

### **What to Test:**
1. **[Category]**: [Specific items to verify]
2. **[Category]**: [Specific items to verify]

### **Please report:**
- ✅ **PASS** if all expected behaviors work correctly
- ❌ **FAIL** with specific details about what's wrong

**Expected Result**: [Summary of successful test outcome]
```

### 2.3 Issue Resolution Workflow

**USER ISSUE REPORTING FORMAT:**

```
❌ **FAIL**: [Brief description]

**Issue Details:**
- [Specific problem observed]
- [Steps that led to issue]
- [Expected vs actual behavior]
- [Browser/device info if relevant]
- [Error messages if any]

**Priority**: [High/Medium/Low]
```

**AI RESPONSE PROTOCOL:**

1. **Acknowledge**: Confirm understanding of reported issue
2. **Analyze**: Identify root cause and scope of fix needed
3. **Propose**: Suggest solution approach for user approval
4. **Implement**: Make fixes following LUW pattern
5. **Re-test**: Request focused testing on fix area

---
## 3. Enhanced Status Management & Tracking

### 3.1 Real-Time Status Updates

**STORY STATUS TRANSITIONS:**

```
🔲 todo → 🚧 in progress → ✅ done
                ↓
        🚫 blocked (requires user intervention)
```

**SPRINT STATUS GRANULARITY:**

```
🔲 not started → 🚧 in progress → 🛠️ implementing US-X → ✅ completed
                                          ↓
                                  🚫 blocked on US-X
```

**STATUS UPDATE TIMING:**

* **Story Start**: Update status in first commit containing story work
* **Story Completion**: Update status in final commit for story
* **Blocker Discovery**: Update immediately when blocker identified
* **Sprint Completion**: Update after all stories completed and tested

### 3.2 Progress Tracking Enhancement

**COMMIT-LEVEL TRACKING:**

```
feat(auth): implement login form validation

Progress: US-2 implementation started
Completed: Form layout, basic validation rules
Remaining: Error handling, success flow, tests

Refs: US-2 (40% complete)
```

**STORY DEPENDENCY TRACKING:**

* Update playbook to show which dependencies are complete
* Identify critical path blockers early
* Communicate timeline impacts of dependency delays

### 3.3 Sprint Completion Criteria (Enhanced)

**COMPREHENSIVE COMPLETION CHECKLIST:**

**Technical Completion:**
* [ ] All code committed to sprint branch with clean history
* [ ] Build passes without errors or warnings
* [ ] All automated tests pass with required coverage
* [ ] Security scan shows no critical vulnerabilities
* [ ] Performance benchmarks meet requirements
* [ ] Documentation updated and accurate

**Quality Assurance Completion:**
* [ ] All user stories tested and approved by user
* [ ] No critical bugs or usability issues remain
* [ ] Cross-browser compatibility verified (if applicable)
* [ ] Mobile responsiveness tested (if applicable)
* [ ] Accessibility requirements met (if applicable)

**Process Completion:**
* [ ] Sprint playbook fully updated with final status
* [ ] All DoD items completed (AI-responsible only)
* [ ] Lessons learned documented
* [ ] Technical debt identified and documented
* [ ] Follow-up tasks identified for future sprints

---
## 4. Advanced Communication & Collaboration

### 4.1 Proactive Communication

**REGULAR UPDATE CADENCE:**

* **After each story completion**: Brief summary of what was delivered
* **Daily progress**: Current focus, any blockers or risks identified
* **Weekly demo**: Show working features, gather feedback
* **Issue discovery**: Immediate notification with context and options

**COMMUNICATION TEMPLATES:**

**Story Completion Update:**

```
✅ **US-X Completed**: [Story Title]

**Delivered**: [Brief description of functionality]
**Testing Status**: Ready for user testing
**Next**: Proceeding to US-Y or awaiting test results

**Demo**: [Link or instructions to see the feature]
```

**Blocker Notification:**

```
🚫 **Blocker Identified**: US-X blocked

**Issue**: [Specific technical problem]
**Impact**: [Effect on timeline/other stories]
**Options**:
1. [Approach A with pros/cons]
2. [Approach B with pros/cons]
3. [Wait for user guidance]

**Recommendation**: [AI's preferred approach with rationale]
```

### 4.2 Decision-Making Support

**TECHNICAL DECISION FRAMEWORK:**

When multiple implementation approaches exist:
1. **Present Options**: 2-3 concrete alternatives
2. **Analyze Trade-offs**: Performance, complexity, maintainability
3. **Recommend Approach**: Based on sprint goals and project context
4. **Document Decision**: Rationale for future reference

**EXAMPLE DECISION DOCUMENTATION:**

```
**Decision**: Use Zustand vs Redux for state management
**Context**: US-4 requires client-side state for user preferences
**Options Considered**:
1. Redux Toolkit (robust, well-known, higher complexity)
2. Zustand (simpler, smaller bundle, less ecosystem)
3. React Context (native, limited functionality)
**Decision**: Zustand
**Rationale**: Simpler API matches sprint timeline, sufficient features, smaller bundle size aligns with performance goals
**Risks**: Smaller ecosystem, team unfamiliarity
```
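For context, a minimal sketch of the user-preferences store that this example decision points to could look like the following (TypeScript with Zustand's `persist` middleware; the store shape, field names, and EN/HR values are illustrative assumptions, not project code):

```typescript
// Illustrative user-preferences store using Zustand (assumed shape only).
import { create } from "zustand";
import { persist } from "zustand/middleware";

type Language = "en" | "hr";
type Theme = "light" | "dark";

interface PreferencesState {
  language: Language;
  theme: Theme;
  setLanguage: (language: Language) => void;
  setTheme: (theme: Theme) => void;
}

export const usePreferences = create<PreferencesState>()(
  persist(
    (set) => ({
      language: "en",
      theme: "light",
      setLanguage: (language) => set({ language }),
      setTheme: (theme) => set({ theme }),
    }),
    { name: "user-preferences" } // persisted under this localStorage key
  )
);
```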
### 4.3 Knowledge Transfer & Documentation

**TECHNICAL DOCUMENTATION REQUIREMENTS:**

* **API Changes**: Update OpenAPI specs, endpoint documentation
* **Database Changes**: Document schema modifications, migration scripts
* **Configuration Changes**: Update environment variables, deployment notes
* **Architecture Decisions**: Record significant design choices and rationale

**USER-FACING DOCUMENTATION:**

* **Feature Guides**: How to use new functionality
* **Troubleshooting**: Common issues and solutions
* **Migration Guides**: Changes that affect existing workflows

---

## 5. Risk Management & Continuous Improvement

### 5.1 Proactive Risk Identification

**TECHNICAL RISK MONITORING:**

* **Dependency Issues**: New vulnerabilities, breaking changes
* **Performance Degradation**: Build times, runtime performance
* **Integration Problems**: API changes, service availability
* **Browser Compatibility**: New features requiring polyfills

**MITIGATION STRATEGIES:**

* **Fallback Implementations**: Alternative approaches for high-risk features
* **Progressive Enhancement**: Core functionality first, enhancements after
* **Feature Flags**: Ability to disable problematic features quickly (see the sketch after this list)
* **Rollback Plans**: Clear steps to revert changes if needed
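A feature flag does not need heavy infrastructure to be useful here. A minimal sketch (TypeScript; the flag names and the environment-variable convention are assumptions for illustration):

```typescript
// Illustrative feature-flag helper: flags default to safe values and can be
// overridden per environment, so a problematic feature can be switched off
// without reverting or redeploying unrelated work.
type FeatureFlag = "barcodePrint" | "newAuthFlow";

const defaults: Record<FeatureFlag, boolean> = {
  barcodePrint: true,
  newAuthFlow: false,
};

export function isEnabled(flag: FeatureFlag): boolean {
  // e.g. FEATURE_BARCODEPRINT=false disables the barcode print feature
  const override = process.env[`FEATURE_${flag.toUpperCase()}`];
  if (override === "true") return true;
  if (override === "false") return false;
  return defaults[flag];
}
```

Callers wrap the risky code path in `if (isEnabled("barcodePrint")) { ... }`, which doubles as the documented rollback plan for that feature.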
### 5.2 Continuous Quality Monitoring

**AUTOMATED QUALITY CHECKS:**

* **Code Quality**: Complexity metrics, duplication detection
* **Security**: Regular dependency scanning, code analysis
* **Performance**: Bundle size tracking, runtime benchmarks
* **Accessibility**: Automated a11y testing where possible
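These checks stay reliable when they run from one entry point both locally and in CI. A minimal gate-script sketch (TypeScript for Node; the specific tools, ESLint, Vitest, and `npm audit`, are assumptions about the toolchain rather than confirmed project dependencies):

```typescript
// Illustrative quality gate: run each check in sequence and fail fast.
import { execSync } from "node:child_process";

const checks: Array<{ name: string; command: string }> = [
  { name: "Lint", command: "npx eslint . --max-warnings 0" },
  { name: "Unit tests + coverage", command: "npx vitest run --coverage" },
  { name: "Dependency audit", command: "npm audit --audit-level=high" },
];

for (const { name, command } of checks) {
  console.log(`Running ${name}...`);
  try {
    execSync(command, { stdio: "inherit" });
  } catch {
    console.error(`${name} failed; resolve before committing.`);
    process.exit(1);
  }
}

console.log("All quality checks passed.");
```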
**QUALITY TREND TRACKING:**

* **Test Coverage**: Maintain or improve coverage percentages
* **Build Performance**: Monitor CI/CD pipeline execution times
* **Code Maintainability**: Track complexity and technical debt metrics

### 5.3 Sprint Retrospective & Learning

**LESSONS LEARNED CAPTURE:**

* **What Worked Well**: Effective practices, good decisions, helpful tools
* **What Could Improve**: Bottlenecks, inefficiencies, communication gaps
* **Unexpected Discoveries**: New insights about technology, domain, or process
* **Technical Debt Created**: Shortcuts taken that need future attention

**ACTION ITEMS FOR FUTURE SPRINTS:**

* **Process Improvements**: Changes to workflow or guidelines
* **Tool Enhancements**: Better automation, monitoring, or development tools
* **Knowledge Gaps**: Areas needing research or training
* **Architecture Evolution**: System improvements based on learned constraints

**KNOWLEDGE SHARING:**

* **Technical Patterns**: Successful implementations to reuse
* **Common Pitfalls**: Issues to avoid in similar work
* **Best Practices**: Refined approaches for specific scenarios
* **Tool Recommendations**: Effective libraries, utilities, or techniques

---
## 6. Enhanced Error Handling & Recovery

### 6.1 Comprehensive Error Classification

**TECHNICAL ERRORS:**

* **Build/Compilation Failures**: Syntax errors, missing dependencies
* **Test Failures**: Unit test failures, integration problems
* **Runtime Errors**: Application crashes, unexpected behaviors
* **Configuration Issues**: Environment variables, deployment problems

**PROCESS ERRORS:**

* **Requirement Ambiguity**: Unclear or conflicting specifications
* **Dependency Delays**: External services, team dependencies
* **Scope Creep**: Requirements expanding beyond the original plan
* **Resource Constraints**: Time, complexity, or capability limits

### 6.2 Enhanced Recovery Protocols

**ERROR RESPONSE FRAMEWORK:**

1. **Stop & Assess**: Immediately halt progress and assess the impact
2. **Document**: Record the exact error, context, and attempted solutions
3. **Classify**: Determine whether the AI can resolve it or user input is required
4. **Communicate**: Notify the user with specific details and options
5. **Wait or Act**: Follow user guidance or implement the approved solution
6. **Verify**: Confirm resolution and prevent recurrence

**USER COMMUNICATION TEMPLATES:**

**Build Failure:**

```
🚫 **Build Failure**: US-X implementation blocked

**Error**: [Exact error message]
**Context**: [What was being attempted]
**Analysis**: [Root cause analysis]
**Impact**: [Effect on current story and sprint timeline]

**Proposed Solution**: [Specific fix approach]
**Alternative**: [Backup approach if available]

**Request**: Please confirm preferred resolution approach
```

**Test Failure:**

```
🚫 **Test Failure**: Automated tests failing for US-X

**Failed Tests**: [List of specific test cases]
**Error Details**: [Test failure messages]
**Analysis**: [Why tests are failing]
**Options**:
1. Fix implementation to pass existing tests
2. Update tests to match new requirements
3. Investigate if test assumptions are incorrect

**Recommendation**: [AI's preferred approach]
```

---
## 7. Sprint Completion & Handover

### 7.1 Comprehensive Sprint Wrap-up

**FINAL DELIVERABLES CHECKLIST:**

* [ ] **Code Repository**: All commits pushed to sprint branch
* [ ] **Documentation**: All technical and user docs updated
* [ ] **Test Results**: All automated tests passing, user testing complete
* [ ] **Sprint Playbook**: Status updated, lessons learned captured
* [ ] **Technical Debt**: Identified and documented for future sprints
* [ ] **Knowledge Transfer**: Key technical decisions documented

**QUALITY VERIFICATION:**

* [ ] **Security Review**: No critical vulnerabilities introduced
* [ ] **Performance**: No regressions in key metrics
* [ ] **Accessibility**: Standards maintained or improved
* [ ] **Browser Support**: Compatibility verified for target browsers
* [ ] **Mobile Support**: Responsive design tested (if applicable)

### 7.2 User Handover Process

**HANDOVER DOCUMENTATION:**

1. **Feature Summary**: What was built and why
2. **Usage Instructions**: How to use the new functionality
3. **Testing Summary**: What was tested and the results
4. **Known Limitations**: Any scope boundaries or technical constraints
5. **Future Recommendations**: Suggested enhancements or improvements

**USER RESPONSIBILITIES POST-HANDOVER:**

* **Code Review**: Review changes before merging to main
* **Integration Testing**: Test with the full system in a staging environment
* **Production Deployment**: Deploy to the production environment
* **User Acceptance**: Final approval of delivered functionality
* **Performance Monitoring**: Monitor system behavior post-deployment

### 7.3 Success Metrics & Evaluation

**SPRINT SUCCESS CRITERIA:**

* [ ] **Goal Achievement**: Primary sprint objective met
* [ ] **Story Completion**: All must-have stories completed
* [ ] **Quality Standards**: No critical bugs, performance targets met
* [ ] **Timeline**: Completed within the estimated timeframe
* [ ] **User Satisfaction**: Delivered functionality meets expectations

**CONTINUOUS IMPROVEMENT METRICS:**

* **Estimation Accuracy**: How close time estimates were to actuals
* **Issue Discovery Rate**: Number of bugs found post-completion
* **User Feedback Quality**: Satisfaction with delivered features
* **Technical Debt Impact**: Amount of debt created vs. value delivered

---
This improved framework enables more effective sprint execution through better planning, enhanced communication, proactive risk management, and comprehensive quality assurance, while maintaining the successful patterns from Sprint 01.

---