# 📑 Sprint Playbook Template (Improved)
## 0. Sprint Status
Status: [🔲 not started | 🚧 in progress | 🛠️ implementing <user story id> | ✅ completed | 🚫 blocked]
Current Focus: [Brief description of what's actively being worked on]
## 1. Sprint Metadata
- Sprint ID: [unique identifier]
- Start Date: [YYYY-MM-DD]
- End Date: [YYYY-MM-DD]
- Sprint Goal: [clear and concise goal statement]
- Team/Agent Responsible: [AI agent name/version]
- Branch Name (Git): [feature/sprint-<id>]
- Estimated Complexity: [Simple | Medium | Complex]
- Dependencies: [List any blocking dependencies identified up front]
## 2. Current State of Software
(Comprehensive snapshot of the project before Sprint work begins)
- Main Features Available: [list with brief descriptions]
- Known Limitations / Issues: [list with impact assessment]
- Relevant Files / Modules: [list with paths and purposes]
- Environment / Dependencies: [runtime versions, frameworks, libs with versions]
- Testing Infrastructure: [available test frameworks, coverage tools, CI/CD status]
- Documentation Status: [current state of docs, known gaps]
## 3. Desired State of Software
(Target state after Sprint is complete)
- New Features: [list with specific functionality descriptions]
- Modified Features: [list with before/after behavior changes]
- Expected Behavior Changes: [user-visible and system-level changes]
- External Dependencies / Integrations: [new libraries, APIs, services with versions]
- Performance Expectations: [any performance requirements or improvements]
- Security Considerations: [security implications or requirements]
## 4. User Stories
Each story represents a unit of work that can be developed, tested, and verified independently.
| Story ID | Title | Description | Acceptance Criteria | Definition of Done | Assignee | Status | Est. Time | Dependencies |
|---|---|---|---|---|---|---|---|---|
| US-1 | [short title] | [detailed description of functionality] | [specific, testable conditions] | [implemented, tested, docs updated, lint clean, user testing completed] | [AI agent] | 🔲 todo | [hours] | [story IDs] |
| US-2 | ... | ... | ... | ... | ... | 🔲 todo | [hours] | [story IDs] |
Status options: 🔲 todo, 🚧 in progress, 🚫 blocked, ✅ done
Story Priority Matrix:
- Must Have: Core functionality required for sprint success
- Should Have: Important features that add significant value
- Could Have: Nice-to-have features if time permits
- Won't Have: Explicitly out of scope for this sprint
## 5. Technical Instructions
(Comprehensive guidance to help the AI converge quickly on the correct solution)
- Code Snippets / Patterns:

  ```typescript
  // Example pattern with actual syntax from project
  export const exampleFunction = async (params: ExampleType): Promise<ResultType> => {
    // Implementation pattern following project conventions
  };
  ```

- Architecture Guidelines:
  - [layering principles, module boundaries, design patterns]
  - [data flow patterns, state management approaches]
  - [error handling conventions, logging patterns]
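To make the "error handling conventions" placeholder above concrete, here is one possible convention a playbook could capture: returning a discriminated-union result instead of throwing. The `Result` type and the `parsePort` function are hypothetical illustrations, not an existing project API.

```typescript
// Hypothetical convention: return a discriminated-union Result instead of
// throwing, so callers must handle the failure branch explicitly.
type Result<T, E = string> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Example use of the convention: parse a TCP port number from a string.
function parsePort(raw: string): Result<number> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}

const parsed = parsePort("8080");
if (parsed.ok) {
  console.log(parsed.value); // 8080
}
```

Recording a pattern like this in the playbook lets the agent match the project's existing error-handling style instead of inventing its own.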
- Coding Style Conventions:
  - [naming rules: camelCase, PascalCase, kebab-case usage]
  - [formatting: prettier, eslint rules]
  - [file organization, import/export patterns]
- Testing Strategy:
  - [unit/integration/e2e testing approach]
  - [testing framework and utilities to use]
  - [coverage targets and quality gates]
  - [manual testing checkpoints and user validation requirements]
- Internationalization (i18n):
  - [translation key patterns and placement]
  - [supported locales and fallback strategies]
  - [client vs server-side translation approaches]
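One way to sketch a locale fallback strategy for this subsection: look up the requested locale, fall back to a default locale, and finally fall back to the key itself so missing translations stay visible. The message table, key scheme, and `t` helper are illustrative, not a specific i18n library.

```typescript
// Hypothetical translation table keyed as "<namespace>.<key>".
const messages: Record<string, Record<string, string>> = {
  en: { "cart.checkout": "Checkout", "cart.empty": "Your cart is empty" },
  de: { "cart.checkout": "Zur Kasse" },
};

const FALLBACK_LOCALE = "en";

// Requested locale first, then the fallback locale, then the raw key.
function t(key: string, locale: string): string {
  return messages[locale]?.[key] ?? messages[FALLBACK_LOCALE]?.[key] ?? key;
}

console.log(t("cart.checkout", "de")); // "Zur Kasse"
console.log(t("cart.empty", "de"));    // falls back to "Your cart is empty"
```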
- Performance Considerations:
  - [bundle size targets, lazy loading strategies]
  - [database query optimization patterns]
  - [caching strategies and invalidation]
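To illustrate the "caching strategies and invalidation" bullet, here is a minimal sketch of one approach: a TTL map with lazy expiry on read plus explicit invalidation after writes. The `TtlCache` class and its fixed TTL are hypothetical, not a project API.

```typescript
// Minimal TTL cache sketch: entries expire lazily on read, and callers can
// invalidate explicitly (e.g. after writing through to the database).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy invalidation on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  invalidate(key: string): void {
    this.store.delete(key); // explicit invalidation after a write
  }
}

const cache = new TtlCache<string>(60_000); // 60-second TTL
cache.set("user:42", "Ada");
console.log(cache.get("user:42")); // "Ada"
cache.invalidate("user:42");
console.log(cache.get("user:42")); // undefined
```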
## 6. Risks and Dependencies
- Technical Risks:
  - [API compatibility issues, framework limitations]
  - [Performance bottlenecks, scalability concerns]
  - [Browser compatibility, device-specific issues]
- Integration Risks:
  - [Third-party service dependencies]
  - [Database migration or schema change needs]
  - [Authentication/authorization complexity]
- Timeline Risks:
  - [Unknown complexity areas]
  - [Potential scope creep triggers]
  - [External dependency availability]
- Dependencies:
  - [other modules, external services, libraries]
  - [team inputs, design assets, API documentation]
  - [infrastructure or deployment requirements]
- Mitigation Strategies:
  - [fallback approaches for high-risk items]
  - [spike work to reduce uncertainty]
  - [simplified alternatives if the main approach fails]
## 7. Quality Gates
- Code Quality:
  - [ ] All code follows project style guidelines
  - [ ] No linting errors or warnings
  - [ ] Code compiles without errors
  - [ ] No security vulnerabilities introduced
- Testing Quality:
  - [ ] Unit tests cover new functionality
  - [ ] Integration points are tested
  - [ ] Manual testing completed by user
  - [ ] Regression testing passed
- Documentation Quality:
  - [ ] Code comments added/updated
  - [ ] README or API docs updated
  - [ ] User-facing documentation updated
  - [ ] Technical decisions documented
## 8. Sprint Definition of Done (DoD)
The Sprint is complete when:

AI-Responsible Items (AI agent can verify and tick):
- [ ] All user stories meet their individual Definition of Done
- [ ] All quality gates passed
- [ ] Code compiles and passes automated tests
- [ ] Code formatting validated (`npm run prettier:check`)
- [ ] Code is committed and pushed on branch `feature/sprint-<id>`
- [ ] Documentation is updated
- [ ] Sprint status updated to ✅ completed
- [ ] No critical bugs or blockers remain
- [ ] Performance meets specified requirements
- [ ] Security review completed (if applicable)

User-Only Items (Only the user can verify and tick):
- [ ] Branch is merged into `main`
- [ ] User acceptance testing completed
- [ ] Production deployment completed (if applicable)
- [ ] External system integrations verified (if applicable)
- [ ] Stakeholder sign-off received
- [ ] Performance validated in production environment

Success Metrics:
- [ ] Sprint goal achieved
- [ ] All must-have stories completed
- [ ] No regression bugs introduced
- [ ] User satisfaction with delivered functionality
## 9. Lessons Learned & Retrospective
(To be filled during/after sprint execution)
What Went Well:
- [successes, good decisions, effective processes]
What Could Be Improved:
- [challenges faced, inefficiencies, areas for optimization]
Action Items for Future Sprints:
- [specific improvements to implement next time]
Technical Debt Created:
- [shortcuts taken that need future attention]
Knowledge Gained:
- [new learnings about technology, domain, or processes]
## 10. Communication & Coordination
Stakeholder Updates:
- [frequency and format of progress updates]
- [key decision points requiring user input]
Testing Coordination:
- [when to request user testing]
- [what specific scenarios to test]
- [how to report and track issues]
Blocker Escalation:
- [how to handle technical blockers]
- [when to pause vs. continue with alternative approaches]
- [communication protocol for critical issues]