Create enhanced versions of sprint framework files incorporating key improvements:

**Sprint Playbook Template (Improved):**

- Add comprehensive status tracking with current focus and complexity estimation
- Enhance quality gates for code, testing, and documentation
- Include proactive risk mitigation strategies with fallback approaches
- Add lessons learned and retrospective sections for continuous improvement
- Define clear communication protocols and success metrics

**How-to-Use Guide (Improved):**

- Implement advanced clarity checking to identify ambiguities before starting
- Add comprehensive project analysis including testing infrastructure assessment
- Enhance story breakdown with Given/When/Then format and dependency tracking
- Include proactive risk management with mitigation strategies
- Define quality gates for automated and manual verification
- Add iterative improvement process for framework refinement

**Implementation Guidelines (Improved):**

- Add structured testing checkpoint protocol with user feedback formats
- Implement iterative refinement process for handling user feedback
- Enhance communication with proactive updates and blocker notifications
- Add advanced error handling with classification and recovery protocols
- Include knowledge transfer and technical decision documentation
- Add continuous quality monitoring with automated checks

These improvements generalize lessons from Sprint 01's successful execution:

- Better user collaboration through structured testing checkpoints
- Enhanced risk management with proactive identification and mitigation
- Comprehensive quality assurance across multiple levels
- Systematic knowledge capture and process optimization
- Clear scope management and change control procedures

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

# 📘 How to Use the Sprint Playbook Template (Improved)

*(Enhanced Guide for AI Agent)*

---

## 🎯 Purpose

This enhanced guide explains how to generate a **Sprint Playbook** from the **Improved Sprint Playbook Template**.
The Playbook serves as the **single source of truth** for sprint planning, execution, and tracking.

**AI Agent Role:**

1. Analyze user requirements with enhanced clarity checking
2. Perform comprehensive project state assessment
3. Create granular, testable user stories with dependencies
4. Populate the playbook with actionable technical guidance
5. Establish quality gates and risk mitigation strategies
6. Enable iterative refinement based on user feedback

---

## 🛠 Enhanced Step-by-Step Instructions

### 1. Analyze User Input (Enhanced)

**MANDATORY ANALYSIS PROCESS:**

1. **Extract Primary Goal**: What is the main business/technical objective?
2. **Identify Scope Boundaries**: What is explicitly included/excluded?
3. **Parse Technical Constraints**: Frameworks, patterns, performance requirements
4. **Assess Complexity Level**: Simple (1-2 days), Medium (3-5 days), Complex (1+ weeks)
5. **Identify Integration Points**: Other systems, APIs, data sources
6. **Check for Non-Functional Requirements**: Security, performance, accessibility

**Enhanced Clarity Checking:**

If ANY of these conditions are met, STOP and ask for clarification:

- **Vague Success Criteria**: "improve", "enhance", "make better" without metrics
- **Missing Technical Details**: Which specific framework version, API endpoints, data formats
- **Unclear User Experience**: No mention of who uses the feature or how
- **Ambiguous Scope**: Could be interpreted as either a small change or a major feature
- **Conflicting Requirements**: Performance vs. feature richness, security vs. usability
- **Missing Dependencies**: References to undefined components, services, or data

**Structured Clarification Process:**

1. **Categorize Unknowns**: Technical, Business, User Experience, Integration
2. **Ask Specific Questions**: Provide 2-3 concrete options when possible
3. **Request Examples**: "Can you show me a similar feature you like?"
4. **Validate Understanding**: Repeat back your interpretation for confirmation

---

### 2. Assess Current State (Comprehensive)

**MANDATORY DISCOVERY STEPS:**

1. **Read Core Application Files**:
   - Entry points: `main.py`, `index.js`, `app.py`, `src/main.ts`
   - Configuration: `.env`, `config.*`, `settings.py`, `next.config.js`
   - Package management: `package.json`, `requirements.txt`, `Cargo.toml`, `pom.xml`

2. **Analyze Architecture**:
   - Directory structure and organization patterns
   - Module boundaries and dependency flows
   - Data models and database schemas
   - API endpoints and routing patterns

3. **Review Testing Infrastructure**:
   - Test frameworks and utilities available
   - Current test coverage and patterns
   - CI/CD pipeline configuration
   - Quality gates and automation

4. **Assess Documentation**:
   - README completeness and accuracy
   - API documentation status
   - Code comments and inline docs
   - User guides or tutorials

5. **Identify Technical Debt**:
   - Known issues or TODOs
   - Deprecated dependencies
   - Performance bottlenecks
   - Security vulnerabilities

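For a Node.js-style project, much of this discovery pass can be captured in a small script. A minimal sketch; the file names and checks are illustrative and should be adapted to whatever stack step 1 actually finds:

```typescript
import { existsSync, readFileSync } from 'fs';

const report: Record<string, unknown> = {};

// Entry points and configuration files that are actually present in the repo
report.entryPoints = ['src/main.ts', 'index.js', 'main.py', 'app.py'].filter(p => existsSync(p));
report.configFiles = ['.env', 'next.config.js', 'settings.py'].filter(p => existsSync(p));

// Exact dependency versions straight from the package manifest (no speculation)
if (existsSync('package.json')) {
  const pkg = JSON.parse(readFileSync('package.json', 'utf8'));
  report.dependencies = pkg.dependencies ?? {};
  report.testScript = pkg.scripts?.test ?? 'MISSING - record as a gap';
}

// Simple signals for testing/CI infrastructure
report.ciConfigured = existsSync('.github/workflows') || existsSync('.gitlab-ci.yml');

console.log(JSON.stringify(report, null, 2));
```
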
**Output Requirements:**

- **Factual Assessment**: No speculation, only observable facts
- **Relevance Filtering**: Focus on sprint-related components only
- **Version Specificity**: Include exact version numbers for all dependencies
- **Gap Identification**: Clearly note missing pieces needed for sprint success

---

### 3. Sprint ID Selection (Enhanced)

**EXACT ALGORITHM:**

1. **Check Directory Structure**:

   ```bash
   # List existing sprint playbooks, or create the directory for the first sprint
   if [ -d docs/sprints ]; then
     ls docs/sprints | grep -E '^sprint-[0-9]{2}-.*\.md$'
   else
     mkdir -p docs/sprints   # no existing sprints: the new ID is "01"
   fi
   ```

2. **Extract and Validate IDs**:
   - Use regex: `sprint-(\d\d)-.*\.md`
   - Validate two-digit format with zero padding
   - Sort numerically (not lexicographically)

3. **Calculate Next ID**:
   - Find the maximum existing ID
   - Increment by 1
   - Ensure zero-padding (`printf "%02d"`-style format)

4. **Conflict Resolution**:
   - If gaps exist (e.g., 01, 03, 05), continue from the maximum (here: 06) rather than filling gaps
   - If no pattern is found, default to "01"
   - Never reuse IDs, even if a sprint was cancelled

**Validation Checkpoint**: Double-check the calculated ID against the filesystem before proceeding.

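A minimal runnable sketch of the same algorithm (Node.js `fs` API; the `docs/sprints/` location and file pattern follow the convention described above):

```typescript
import { existsSync, mkdirSync, readdirSync } from 'fs';

function nextSprintId(dir = 'docs/sprints'): string {
  if (!existsSync(dir)) {
    mkdirSync(dir, { recursive: true });
    return '01'; // no sprints yet
  }

  const ids = readdirSync(dir)
    .map(name => name.match(/^sprint-(\d{2})-.*\.md$/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map(m => parseInt(m[1], 10)); // numeric comparison, not lexicographic

  if (ids.length === 0) return '01';

  // Maximum existing ID + 1, zero-padded; gaps and cancelled sprints are never reused.
  return String(Math.max(...ids) + 1).padStart(2, '0');
}

console.log(nextSprintId()); // e.g. "02" if only sprint-01-*.md exists
```
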
---
### 4. Define Desired State (Enhanced)

**MANDATORY SECTIONS WITH EXAMPLES:**

1. **New Features** (Specific):

   ```
   ❌ Bad: "User authentication system"
   ✅ Good: "JWT-based login with Google OAuth integration, session management, and role-based access control"
   ```

2. **Modified Features** (Before/After):

   ```
   ❌ Bad: "Improve dashboard performance"
   ✅ Good: "Dashboard load time: 3.2s → <1.5s by implementing virtual scrolling and data pagination"
   ```

3. **Expected Behavior Changes** (User-Visible):

   ```
   ❌ Bad: "Better error handling"
   ✅ Good: "Invalid form inputs show inline validation messages instead of generic alerts"
   ```

4. **External Dependencies** (Version-Specific):

   ```
   ❌ Bad: "Add React components"
   ✅ Good: "Add @headlessui/react ^1.7.0 for accessible dropdown components"
   ```

**Quality Checklist**:

- Each item is independently verifiable
- Success criteria are measurable
- Implementation approach is technically feasible
- Dependencies are explicitly listed with versions

---

### 5. Break Down Into User Stories (Enhanced)

**STORY GRANULARITY RULES:**

- **Maximum Implementation Time**: 4-6 hours of focused work
- **Independent Testability**: Each story can be tested in isolation
- **Single Responsibility**: One clear functional outcome per story
- **Atomic Commits**: Typically 1-2 commits per story maximum

**ENHANCED STORY COMPONENTS:**

1. **Story ID**: Sequential numbering with dependency tracking
2. **Title**: Action-oriented, 2-4 words (e.g., "Add Login Button", "Implement JWT Validation")
3. **Description**: Includes context, acceptance criteria preview, technical approach
4. **Acceptance Criteria**: 3-5 specific, testable conditions using Given/When/Then format
5. **Definition of Done**: Includes "user testing completed" for all user-facing changes
6. **Dependencies**: References to other stories that must complete first
7. **Estimated Time**: Realistic hour estimate for planning
8. **Risk Level**: Low/Medium/High based on technical complexity

**ACCEPTANCE CRITERIA FORMAT:**

```
Given [initial state]
When [user action or system trigger]
Then [expected outcome]
And [additional verifications]
```

**Example User Story:**

```
| US-3 | Implement JWT Validation | Create middleware to validate JWT tokens on protected routes | Given a request to protected endpoint, When JWT is missing/invalid/expired, Then return 401 with specific error message, And log security event | Middleware implemented, unit tests added, error handling tested, security logging verified, **user testing completed** | AI-Agent | 🔲 todo | 3 hours | US-1, US-2 |
```

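Acceptance criteria written this way translate almost one-to-one into automated tests. A minimal sketch for the US-3 example, assuming the Jest + Supertest setup referenced in the testing strategy later in this guide; the `app` import, the `/api/profile` route, and the error codes are placeholders:

```typescript
import request from 'supertest';
import { app } from '../src/app'; // placeholder for the project's real Express app

describe('US-3: JWT validation middleware', () => {
  it('returns 401 with a specific error when the token is missing', async () => {
    // Given a request to a protected endpoint, When the JWT is missing...
    const res = await request(app).get('/api/profile');

    // ...Then return 401 with a specific error message
    expect(res.status).toBe(401);
    expect(res.body.error).toBe('TOKEN_MISSING');
  });

  it('returns 401 when the token is expired', async () => {
    const res = await request(app)
      .get('/api/profile')
      .set('Authorization', 'Bearer <expired-token>');

    expect(res.status).toBe(401);
    expect(res.body.error).toBe('TOKEN_EXPIRED');
  });
});
```

The "And log security event" clause would be covered by asserting on the logger (for example with a Jest spy), which keeps the entire criterion verifiable by `npm test`.
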
**DEPENDENCY MANAGEMENT:**

- Use topological sorting for story order
- Identify critical path and potential parallelization
- Plan for dependency failures or delays
- Consider story splitting to reduce dependencies

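A minimal sketch of what topological ordering of stories can look like (story IDs and dependencies are illustrative). Stories that become ready in the same pass are candidates for parallel work, and a detected cycle signals that stories need to be split or re-planned:

```typescript
interface Story {
  id: string;
  dependsOn: string[];
}

// Kahn-style topological sort over story dependencies.
function orderStories(stories: Story[]): string[] {
  const remaining = new Map(stories.map(s => [s.id, new Set(s.dependsOn)]));
  const ordered: string[] = [];

  while (remaining.size > 0) {
    // Every story whose dependencies are already scheduled is ready now.
    const ready = [...remaining.keys()].filter(id => remaining.get(id)!.size === 0);
    if (ready.length === 0) {
      throw new Error('Circular dependency between stories - split or re-plan them');
    }
    for (const id of ready) {
      ordered.push(id);
      remaining.delete(id);
      for (const deps of remaining.values()) deps.delete(id);
    }
  }
  return ordered;
}

console.log(orderStories([
  { id: 'US-3', dependsOn: ['US-1', 'US-2'] },
  { id: 'US-1', dependsOn: [] },
  { id: 'US-2', dependsOn: ['US-1'] },
])); // -> [ 'US-1', 'US-2', 'US-3' ]
```
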
---
### 6. Add Technical Instructions (Comprehensive)

**REQUIRED SECTIONS WITH DEPTH:**

1. **Code Snippets/Patterns** (Executable Examples):

   ```typescript
   // Actual working example following project conventions
   export const authenticatedRoute = withAuth(async (req: AuthRequest, res: Response) => {
     const { userId } = req.user; // from JWT middleware
     const data = await fetchUserData(userId);
     return res.json({ success: true, data });
   });
   ```

2. **Architecture Guidelines** (Specific Patterns):
   - **Layer Separation**: "Controllers in `/api/`, business logic in `/services/`, data access in `/repositories/`"
   - **Error Handling**: "Use Result<T, E> pattern, never throw in async functions"
   - **State Management**: "Use Zustand for client state, React Query for server state"

3. **Coding Style Conventions** (Tool-Specific):
   - **Naming**: "Components: PascalCase, functions: camelCase, constants: SCREAMING_SNAKE_CASE"
   - **Files**: "Components end with `.tsx`, utilities with `.ts`, tests with `.test.ts`"
   - **Imports**: "Absolute imports with `@/` alias, group by type (libraries, internal, relative)"

4. **Testing Strategy** (Framework-Specific):
   - **Unit Tests**: "Jest + Testing Library, minimum 80% coverage for business logic"
   - **Integration**: "Supertest for API endpoints, test database with cleanup"
   - **Manual Testing**: "User validates UI/UX after each user-facing story completion"

5. **Quality Gates** (Automated Checks):
   - **Build**: "`npm run build` must pass without errors"
   - **Lint**: "`npm run lint:check` must pass with zero warnings"
   - **Format**: "`npm run prettier:check` must pass"
   - **Tests**: "`npm test` must pass with coverage thresholds met"

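For a guideline like the Result<T, E> rule above, it helps to include the concrete shape in the playbook so every story handles errors the same way. A minimal sketch, with an in-memory user list standing in for the real data layer (a project might instead adopt a library such as neverthrow):

```typescript
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

type User = { id: string; name: string };
const users: User[] = [{ id: '42', name: 'Ada' }]; // stand-in for the real data layer

// Async functions return a Result instead of throwing.
async function fetchUser(id: string): Promise<Result<User, 'NOT_FOUND'>> {
  const user = users.find(u => u.id === id);
  return user ? { ok: true, value: user } : { ok: false, error: 'NOT_FOUND' };
}

// Callers handle both branches explicitly - no try/catch around business logic.
async function main() {
  const result = await fetchUser('42');
  if (result.ok) console.log(result.value.name); // narrowed to the success branch
  else console.error(result.error);              // narrowed to 'NOT_FOUND'
}
main();
```
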
---
### 7. Capture Risks and Dependencies (Proactive)

**ENHANCED RISK ASSESSMENT:**

1. **Technical Risks** (with Mitigation):

   ```
   Risk: "JWT library might not support RS256 algorithm"
   Impact: High (blocks authentication)
   Probability: Low
   Mitigation: "Research jsonwebtoken vs. jose libraries in discovery phase"
   ```

2. **Integration Risks** (with Fallbacks):

   ```
   Risk: "Google OAuth API rate limits during development"
   Impact: Medium (slows testing)
   Probability: Medium
   Fallback: "Mock OAuth responses for local development"
   ```

3. **Timeline Risks** (with Buffers):

   ```
   Risk: "Database migration complexity unknown"
   Impact: High (affects multiple stories)
   Buffer: "Add 20% time buffer, prepare manual migration scripts"
   ```

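If the playbook also tracks risks programmatically (for example, to sort status updates by impact), a small shared shape keeps entries consistent. A hypothetical sketch; the field names are illustrative, not a prescribed schema:

```typescript
type Level = 'Low' | 'Medium' | 'High';

interface Risk {
  description: string;
  impact: Level;
  probability: Level;
  mitigation: string; // or a fallback / buffer note, as in the examples above
}

const risks: Risk[] = [
  {
    description: 'JWT library might not support RS256 algorithm',
    impact: 'High',
    probability: 'Low',
    mitigation: 'Research jsonwebtoken vs. jose libraries in discovery phase',
  },
  {
    description: 'Google OAuth API rate limits during development',
    impact: 'Medium',
    probability: 'Medium',
    mitigation: 'Mock OAuth responses for local development',
  },
];

// Surface high-impact risks first in status updates.
const critical = risks.filter(r => r.impact === 'High');
console.log(`${critical.length} high-impact risk(s) to watch`);
```
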
**DEPENDENCY TRACKING:**

- **Internal Dependencies**: Other modules, shared utilities, database changes
- **External Dependencies**: Third-party APIs, service availability, credentials
- **Human Dependencies**: Design assets, content, domain expertise
- **Infrastructure Dependencies**: Deployment access, environment variables, certificates

---

### 8. Enhanced Quality Gates

**CODE QUALITY GATES:**

- [ ] **Static Analysis**: ESLint, SonarQube, or equivalent passes
- [ ] **Security Scan**: No high/critical vulnerabilities in dependencies
- [ ] **Performance**: No significant regression in key metrics
- [ ] **Accessibility**: WCAG compliance for user-facing changes

**TESTING QUALITY GATES:**

- [ ] **Coverage Threshold**: Minimum X% line coverage maintained
- [ ] **Test Quality**: No skipped tests, meaningful assertions
- [ ] **Edge Cases**: Error conditions and boundary values tested
- [ ] **User Acceptance**: Manual testing completed and approved

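Coverage thresholds are easiest to enforce in the automated test gate rather than by manual review. A minimal sketch assuming Jest as the test runner (as in the testing strategy above); the percentages mirror the earlier 80% example and should be set per project:

```typescript
// jest.config.ts
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,       // "Minimum X% line coverage maintained"
      branches: 70,
      functions: 80,
      statements: 80,
    },
  },
};

export default config;
```

With this in place, `npm test` fails whenever coverage drops below the configured thresholds, so the gate cannot be skipped silently.
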
**DOCUMENTATION QUALITY GATES:**

- [ ] **API Changes**: OpenAPI spec updated for backend changes
- [ ] **User Impact**: User-facing changes documented in appropriate format
- [ ] **Technical Decisions**: ADRs (Architecture Decision Records) updated for significant changes
- [ ] **Runbook Updates**: Deployment or operational procedures updated

---

### 9. Advanced Definition of Done (DoD)

**AI-RESPONSIBLE ITEMS** (Comprehensive):

* [ ] All user stories meet individual DoD criteria
* [ ] All quality gates passed (code, testing, documentation)
* [ ] Code compiles and passes all automated tests
* [ ] Security review completed (dependency scan, code analysis)
* [ ] Performance benchmarks meet requirements
* [ ] Accessibility standards met (if applicable)
* [ ] Code is committed with conventional commit messages
* [ ] Branch pushed with all commits and proper history
* [ ] Documentation updated and reviewed
* [ ] Sprint status updated to `✅ completed`
* [ ] Technical debt documented for future sprints
* [ ] Lessons learned captured in playbook

**USER-ONLY ITEMS** (Clear Boundaries):

* [ ] User acceptance testing completed successfully
* [ ] Cross-browser/device compatibility verified
* [ ] Production deployment completed and verified
* [ ] External system integrations tested end-to-end
* [ ] Performance validated in production environment
* [ ] Stakeholder sign-off received
* [ ] User training/documentation reviewed (if applicable)
* [ ] Monitoring and alerting configured (if applicable)

---

## ⚠️ Enhanced Guardrails

**SCOPE CREEP PREVENTION:**

- Document any "nice-to-have" features discovered during implementation
- Create follow-up sprint candidates for scope additions
- Maintain strict focus on the original sprint goal
- Get explicit approval for any requirement changes

**TECHNICAL DEBT MANAGEMENT:**

- Document shortcuts taken with rationale
- Estimate effort to address debt properly
- Prioritize debt that affects sprint success
- Plan debt paydown in future sprints

**COMMUNICATION PROTOCOLS:**

- **Daily Updates**: Progress, blockers, timeline risks
- **Weekly Reviews**: Demo working features, gather feedback
- **Blocker Escalation**: Immediate notification with context and options
- **Scope Questions**: Present options with pros/cons, request decision

---

## ✅ Enhanced Output Requirements

**MANDATORY CHECKLIST** - the Sprint Playbook must have:

1. ✅ **Validated Sprint ID** - Following algorithmic ID selection
2. ✅ **Complete metadata** - All fields filled with realistic estimates
3. ✅ **Comprehensive state analysis** - Based on thorough code examination
4. ✅ **Measurable desired state** - Specific success criteria, not vague goals
5. ✅ **Granular user stories** - 4-6 hour implementation chunks with dependencies
6. ✅ **Executable acceptance criteria** - Given/When/Then format where appropriate
7. ✅ **Comprehensive DoD items** - Including user testing checkpoints
8. ✅ **Detailed technical guidance** - Working code examples and specific tools
9. ✅ **Proactive risk management** - Risks with mitigation strategies
10. ✅ **Quality gates defined** - Automated and manual verification steps
11. ✅ **Clear responsibility boundaries** - AI vs. user tasks explicitly separated
12. ✅ **Communication protocols** - How and when to interact during the sprint

**FINAL VALIDATION**: The completed playbook should enable a different AI agent to execute the sprint successfully with minimal additional clarification.

**ITERATIVE IMPROVEMENT**: After each sprint, capture lessons learned and refine the process for better efficiency and quality in future sprints.

---