docs: add improved sprint framework based on Sprint 01 lessons learned

Create enhanced versions of sprint framework files incorporating key improvements:

**Sprint Playbook Template (Improved):**
- Add comprehensive status tracking with current focus and complexity estimation
- Enhance quality gates for code, testing, and documentation
- Include proactive risk mitigation strategies with fallback approaches
- Add lessons learned and retrospective sections for continuous improvement
- Define clear communication protocols and success metrics

**How-to-Use Guide (Improved):**
- Implement advanced clarity checking to identify ambiguities before starting
- Add comprehensive project analysis including testing infrastructure assessment
- Enhance story breakdown with Given/When/Then format and dependency tracking
- Include proactive risk management with mitigation strategies
- Define quality gates for automated and manual verification
- Add iterative improvement process for framework refinement

**Implementation Guidelines (Improved):**
- Add structured testing checkpoint protocol with user feedback formats
- Implement iterative refinement process for handling user feedback
- Enhance communication with proactive updates and blocker notifications
- Add advanced error handling with classification and recovery protocols
- Include knowledge transfer and technical decision documentation
- Add continuous quality monitoring with automated checks

These improvements generalize lessons from Sprint 01's successful execution:
- Better user collaboration through structured testing checkpoints
- Enhanced risk management with proactive identification and mitigation
- Comprehensive quality assurance across multiple levels
- Systematic knowledge capture and process optimization
- Clear scope management and change control procedures

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-14 22:03:23 +02:00
parent 29e0fe5334
commit 397b4845d6
3 changed files with 1119 additions and 0 deletions

# 📑 Sprint Playbook Template (Improved)
## 0. Sprint Status
```
Status: [🔲 not started | 🚧 in progress | 🛠️ implementing <user story id> | ✅ completed | 🚫 blocked]
```
**Current Focus**: [Brief description of what's actively being worked on]
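The status values above form a small state machine; they can be modeled as a discriminated union so that tooling can render and validate the status line. A minimal sketch in TypeScript (the type and function names are illustrative, not part of the template):

```typescript
// Hypothetical sketch: the sprint status values as a discriminated union.
type SprintStatus =
  | { kind: "not_started" }
  | { kind: "in_progress" }
  | { kind: "implementing"; storyId: string } // e.g. "US-1"
  | { kind: "completed" }
  | { kind: "blocked"; reason: string };

// Render a status in the playbook's emoji notation.
function renderStatus(s: SprintStatus): string {
  switch (s.kind) {
    case "not_started":
      return "🔲 not started";
    case "in_progress":
      return "🚧 in progress";
    case "implementing":
      return `🛠️ implementing ${s.storyId}`;
    case "completed":
      return "✅ completed";
    case "blocked":
      return `🚫 blocked (${s.reason})`;
  }
}
```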
---
## 1. Sprint Metadata
* **Sprint ID:** [unique identifier]
* **Start Date:** [YYYY-MM-DD]
* **End Date:** [YYYY-MM-DD]
* **Sprint Goal:** [clear and concise goal statement]
* **Team/Agent Responsible:** [AI agent name/version]
* **Branch Name (Git):** [feature/sprint-<id>-<short-description>]
* **Estimated Complexity:** [Simple | Medium | Complex]
* **Dependencies:** [List any blocking dependencies identified up front]
---
## 2. Current State of Software
*(Comprehensive snapshot of the project before Sprint work begins)*
* **Main Features Available:** [list with brief descriptions]
* **Known Limitations / Issues:** [list with impact assessment]
* **Relevant Files / Modules:** [list with paths and purposes]
* **Environment / Dependencies:** [runtime versions, frameworks, libs with versions]
* **Testing Infrastructure:** [available test frameworks, coverage tools, CI/CD status]
* **Documentation Status:** [current state of docs, known gaps]
---
## 3. Desired State of Software
*(Target state after Sprint is complete)*
* **New Features:** [list with specific functionality descriptions]
* **Modified Features:** [list with before/after behavior changes]
* **Expected Behavior Changes:** [user-visible and system-level changes]
* **External Dependencies / Integrations:** [new libraries, APIs, services with versions]
* **Performance Expectations:** [any performance requirements or improvements]
* **Security Considerations:** [security implications or requirements]
---
## 4. User Stories
Each story represents a **unit of work** that can be developed, tested, and verified independently.
| Story ID | Title | Description | Acceptance Criteria | Definition of Done | Assignee | Status | Est. Time | Dependencies |
| -------- | ----- | ----------- | ------------------- | ------------------ | -------- | ------ | --------- | ------------ |
| US-1 | [short title] | [detailed description of functionality] | [specific, testable conditions] | [implemented, tested, docs updated, lint clean, **user testing completed**] | [AI agent] | 🔲 todo | [hours] | [story IDs] |
| US-2 | ... | ... | ... | ... | ... | 🔲 todo | [hours] | [story IDs] |
**Status options:** `🔲 todo`, `🚧 in progress`, `🚫 blocked`, `✅ done`
**Story Priority Matrix:**
- **Must Have**: Core functionality required for sprint success
- **Should Have**: Important features that add significant value
- **Could Have**: Nice-to-have features if time permits
- **Won't Have**: Explicitly out of scope for this sprint
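The table columns and priority matrix above translate naturally into a record type, which also makes the dependency column checkable: a story is ready only when everything it depends on is done. A sketch of that shape (field names are assumptions, not prescribed by the template):

```typescript
// Hypothetical sketch: a user story record matching the table columns above.
type StoryStatus = "todo" | "in_progress" | "blocked" | "done";
type Priority = "must" | "should" | "could" | "wont";

interface UserStory {
  id: string;                  // e.g. "US-1"
  title: string;
  description: string;
  acceptanceCriteria: string[];
  priority: Priority;
  status: StoryStatus;
  estimatedHours: number;
  dependencies: string[];      // story IDs that must finish first
}

// A story is ready to start once all of its dependencies are done.
function isReady(story: UserStory, all: UserStory[]): boolean {
  return story.dependencies.every(
    (dep) => all.find((s) => s.id === dep)?.status === "done"
  );
}
```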
---
## 5. Technical Instructions
*(Comprehensive guidance to help AI converge quickly on the correct solution)*
* **Code Snippets / Patterns:**
```typescript
// Example pattern with actual syntax from the project.
// Placeholder types — replace with the project's real types.
interface ExampleType { input: string; }
interface ResultType { output: string; }

export const exampleFunction = async (params: ExampleType): Promise<ResultType> => {
  // Implementation pattern following project conventions
  return { output: params.input };
};
```
* **Architecture Guidelines:**
- [layering principles, module boundaries, design patterns]
- [data flow patterns, state management approaches]
- [error handling conventions, logging patterns]
* **Coding Style Conventions:**
- [naming rules: camelCase, PascalCase, kebab-case usage]
- [formatting: prettier, eslint rules]
- [file organization, import/export patterns]
* **Testing Strategy:**
- [unit/integration/e2e testing approach]
- [testing framework and utilities to use]
- [coverage targets and quality gates]
- [manual testing checkpoints and user validation requirements]
* **Internationalization (i18n):**
- [translation key patterns and placement]
- [supported locales and fallback strategies]
- [client vs server-side translation approaches]
* **Performance Considerations:**
- [bundle size targets, lazy loading strategies]
- [database query optimization patterns]
- [caching strategies and invalidation]
---
## 6. Risks and Dependencies
* **Technical Risks:**
- [API compatibility issues, framework limitations]
- [Performance bottlenecks, scalability concerns]
- [Browser compatibility, device-specific issues]
* **Integration Risks:**
- [Third-party service dependencies]
- [Database migration or schema change needs]
- [Authentication/authorization complexity]
* **Timeline Risks:**
- [Unknown complexity areas]
- [Potential scope creep triggers]
- [External dependency availability]
* **Dependencies:**
- [other modules, external services, libraries]
- [team inputs, design assets, API documentation]
- [infrastructure or deployment requirements]
* **Mitigation Strategies:**
- [fallback approaches for high-risk items]
- [spike work to reduce uncertainty]
- [simplified alternatives if main approach fails]
---
## 7. Quality Gates
* **Code Quality:**
- [ ] All code follows project style guidelines
- [ ] No linting errors or warnings
- [ ] Code compiles without errors
- [ ] No security vulnerabilities introduced
* **Testing Quality:**
- [ ] Unit tests cover new functionality
- [ ] Integration points are tested
- [ ] Manual testing completed by user
- [ ] Regression testing passed
* **Documentation Quality:**
- [ ] Code comments added/updated
- [ ] README or API docs updated
- [ ] User-facing documentation updated
- [ ] Technical decisions documented
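The automated gates above can be run as an ordered pipeline that stops at the first failure. A sketch with an injectable runner so the sequencing logic is testable; the gate commands are assumptions about the project's npm scripts, not fixed by the template:

```typescript
// Hypothetical sketch: run automated quality gates in order, report the
// first failing command (or null when every gate passes).
type Runner = (cmd: string) => boolean; // true if the command exited cleanly

const GATES = ["npm run lint", "npm run prettier:check", "npm test"];

function runGates(run: Runner, gates: string[] = GATES): string | null {
  for (const cmd of gates) {
    if (!run(cmd)) return cmd;
  }
  return null;
}
```

In practice the runner would shell out (e.g. via `child_process.execSync`) and the returned command name would feed the blocker-escalation protocol in §10.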
---
## 8. Sprint Definition of Done (DoD)
The Sprint is complete when:
**AI-Responsible Items** (AI agent can verify and tick):
* [ ] All user stories meet their individual Definition of Done
* [ ] All quality gates passed
* [ ] Code compiles and passes automated tests
* [ ] Code formatting validated (`npm run prettier:check`)
* [ ] Code is committed and pushed on branch `feature/sprint-<id>`
* [ ] Documentation is updated
* [ ] Sprint status updated to `✅ completed`
* [ ] No critical bugs or blockers remain
* [ ] Performance meets specified requirements
* [ ] Security review completed (if applicable)
**User-Only Items** (Only user can verify and tick):
* [ ] Branch is merged into main
* [ ] User acceptance testing completed
* [ ] Production deployment completed (if applicable)
* [ ] External system integrations verified (if applicable)
* [ ] Stakeholder sign-off received
* [ ] Performance validated in production environment
**Success Metrics:**
* [ ] Sprint goal achieved
* [ ] All must-have stories completed
* [ ] No regression bugs introduced
* [ ] User satisfaction with delivered functionality
---
## 9. Lessons Learned & Retrospective
*(To be filled during/after sprint execution)*
**What Went Well:**
- [successes, good decisions, effective processes]
**What Could Be Improved:**
- [challenges faced, inefficiencies, areas for optimization]
**Action Items for Future Sprints:**
- [specific improvements to implement next time]
**Technical Debt Created:**
- [shortcuts taken that need future attention]
**Knowledge Gained:**
- [new learnings about technology, domain, or processes]
---
## 10. Communication & Coordination
**Stakeholder Updates:**
- [frequency and format of progress updates]
- [key decision points requiring user input]
**Testing Coordination:**
- [when to request user testing]
- [what specific scenarios to test]
- [how to report and track issues]
**Blocker Escalation:**
- [how to handle technical blockers]
- [when to pause vs. continue with alternative approaches]
- [communication protocol for critical issues]
---