feat: add Sprint Playbook Template and Implementation Guidelines for AI agents

Date: 2025-09-14 08:01:43 +02:00
Parent: addeb312b8
Commit: 941ad70567
3 changed files with 651 additions and 0 deletions

# 📘 How to Use the Sprint Playbook Template
*(Guide for AI Agent)*
---
## 🎯 Purpose
This guide explains how to generate a **Sprint Playbook** from the **Sprint Playbook Template**.
The Playbook is the **authoritative plan and tracking file** for the Sprint.
Your role as AI agent is to:
1. Interpret the user's Sprint goal.
2. Analyze the current project state.
3. Split work into clear user stories.
4. Populate the Playbook with concise, actionable details.
5. Ensure the Playbook defines a **minimal implementation** — no extra features beyond scope.
---
## 🛠 Step-by-Step Instructions
### 1. Analyze User Input
* Read the user's description of what should be achieved in the Sprint.
* Extract the **general goal** (e.g., “add authentication,” “improve reporting,” “fix performance issues”).
* Note any explicit **constraints** (frameworks, coding styles, patterns).
**If gaps, ambiguities, or contradictions are detected:**
**MANDATORY PROCESS:**
1. **STOP playbook creation immediately**
2. **Ask user for specific clarification** with concrete questions:
* *"Do you want authentication for all users or just admins?"*
* *"Should performance improvements target backend response times or frontend rendering?"*
* *"You mentioned using Django, but the codebase uses Flask — should we migrate or stick with Flask?"*
3. **Wait for user response** - do not proceed with assumptions
4. **Repeat clarification cycle** until goal is completely unambiguous
5. **Only then proceed** to create the playbook
**CRITICAL RULE**: Never proceed with unclear requirements - this will cause blocking during implementation.
---
### 2. Assess Current State
**MANDATORY STEPS:**
1. Read project entry points (e.g., `main.py`, `index.js`, `app.py`)
2. Identify and read core business logic modules
3. Read configuration files (`.env`, `config.*`, etc.)
4. Read dependency files (`package.json`, `requirements.txt`, `Cargo.toml`, etc.)
5. List current main features available
6. List known limitations or issues
7. List relevant file paths
8. Document runtime versions, frameworks, and libraries
**Output Requirements:**
* Keep descriptions factual and concise (2-3 sentences per item)
* Focus only on functionality relevant to the sprint goal
* Do not speculate about future capabilities
---
### 3. Sprint ID Selection (Mandatory Process)
**EXACT STEPS TO FOLLOW:**
1. **Check if `docs/sprints/` directory exists**
- If directory doesn't exist: set **Sprint ID = `01`**
- If directory exists: proceed to step 2
2. **List existing sprint files**
- Search for files matching pattern: `docs/sprints/sprint-??-*.md`
- Extract only the two-digit numbers (ignore everything else)
- Example: `sprint-03-auth.md` → extract `03`
3. **Calculate new ID**
- If no matching files found: set **Sprint ID = `01`**
- If files found: find maximum ID number and increment by 1
   - Preserve zero-padding (e.g., `03` → `04`, `09` → `10`)
**CRITICAL RULES:**
- NEVER use IDs from example documents - examples are for formatting only
- NEVER guess or assume sprint IDs
- ALWAYS preserve two-digit zero-padding format
- This is the single source of truth for Sprint ID assignment
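The selection rule above can be expressed as a short Python sketch (the helper name is hypothetical; only the `docs/sprints/` layout and the `sprint-NN-*.md` naming come from this guide):

```python
import re
from pathlib import Path

def next_sprint_id(sprints_dir="docs/sprints"):
    """Return the next two-digit Sprint ID per the selection rule (sketch)."""
    directory = Path(sprints_dir)
    if not directory.is_dir():
        return "01"  # step 1: no directory yet
    # Step 2: extract two-digit IDs from files like sprint-03-auth.md
    ids = []
    for path in directory.glob("sprint-*.md"):
        match = re.match(r"sprint-(\d{2})-", path.name)
        if match:
            ids.append(int(match.group(1)))
    if not ids:
        return "01"  # step 3: no matching files
    return f"{max(ids) + 1:02d}"  # increment max, keep zero-padding
```

Note that `:02d` formatting preserves the required two-digit zero-padding automatically.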
---
### 4. Define Desired State
**MANDATORY SECTIONS:**
1. **New Features** - List exactly what new functionality will be added
2. **Modified Features** - List existing features that will be changed
3. **Expected Behavior Changes** - Describe how user/system behavior will differ
4. **External Dependencies/Integrations** - List new libraries, APIs, or services needed
**Requirements:**
* Each item must be specific and measurable
* Avoid vague terms like "improve" or "enhance" - be precise
* Only include changes directly needed for the sprint goal
---
### 5. Break Down Into User Stories
**STORY CREATION RULES:**
1. Each story must be implementable independently
2. Story should require 1-2 commits maximum
3. Story must have measurable acceptance criteria
4. Story must include specific DoD items
**MANDATORY STORY COMPONENTS:**
* **Story ID**: Sequential numbering (`US-1`, `US-2`, `US-3`...)
* **Title**: 2-4 word description of the functionality
* **Description**: Clear explanation of what needs to be implemented
* **Acceptance Criteria**: Specific, testable conditions that must be met
* **Definition of Done**: Concrete checklist (implemented, tested, docs updated, lint clean)
* **Assignee**: Always `AI-Agent` for AI-implemented sprints
* **Status**: Always starts as `🔲 todo`
**STATUS PROGRESSION (AI must follow exactly):**
* `🔲 todo` → `🚧 in progress` → `✅ done`
* If blocked: any status → `🚫 blocked` (requires user intervention)
* **CRITICAL**: If ANY story becomes `🚫 blocked`, STOP all sprint work immediately
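One way to make the progression machine-checkable is a small transition table (a sketch; the statuses come from this guide, the helper name is ours, and unblocking is deliberately absent because only the user may resolve `🚫 blocked`):

```python
# Allowed story-status transitions per the progression rules above.
TRANSITIONS = {
    "🔲 todo": {"🚧 in progress", "🚫 blocked"},
    "🚧 in progress": {"✅ done", "🚫 blocked"},
    "🚫 blocked": set(),  # requires user intervention; AI may not unblock
    "✅ done": set(),
}

def can_transition(current, target):
    """Return True if the AI agent may move a story from current to target."""
    return target in TRANSITIONS.get(current, set())
```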
---
### 6. Add Technical Instructions
**REQUIRED SECTIONS:**
* **Code Snippets/Patterns**: Include specific code examples that show structure
* **Architecture Guidelines**: Define module boundaries, layering, design patterns
* **Coding Style Conventions**: Specify naming rules, formatting, linting requirements
* **Testing Strategy**: Define what testing is required (unit/integration, framework, coverage)
**GUIDELINES:**
* Provide concrete examples, not abstract descriptions
* If multiple approaches exist, specify exactly which one to use
* Include specific commands for building, testing, and linting
* Reference existing project conventions where possible
---
### 7. Capture Risks and Dependencies
* List potential **risks** (technical, integration, scope-related).
* List **dependencies** (modules, libraries, APIs, data sources).
---
### 8. Apply Definition of Done (DoD)
**USER STORY DoD (for each story):**
* Must include specific, measurable items like:
* "Endpoint implemented and returns correct status codes"
* "Unit tests added with 80%+ coverage"
* "Documentation updated in README"
* "Code passes linter without errors"
**SPRINT DoD STRUCTURE (mandatory separation):**
**AI-Responsible Items** (AI MUST tick these when completed):
* [ ] All user stories meet their individual Definition of Done
* [ ] Code compiles and passes automated tests
* [ ] Code is committed and pushed on branch `[feature/sprint-<id>]`
* [ ] Documentation is updated
* [ ] Sprint status updated to `✅ done`
**User-Only Items** (AI MUST NEVER tick these):
* [ ] Branch is merged into main
* [ ] Production deployment completed (if applicable)
* [ ] External system integrations verified (if applicable)
**CRITICAL RULE**: AI agents must NEVER tick user-only DoD items under any circumstances.
---
## ⚠️ Guardrail Against Overfitting
If you are provided with **example Sprint Playbooks**:
* Use them **only to understand formatting and structure**.
* Do **not** copy their technologies, libraries, or domain-specific details unless explicitly relevant.
* Always prioritize:
1. **Sprint Playbook Template**
2. **User instructions**
3. **Project state analysis**
---
## ✅ Output Requirements
**MANDATORY CHECKLIST** - Sprint Playbook must have:
1. **Correct Sprint ID** - Follow the Sprint ID selection rule (increment from existing)
2. **Complete metadata** - All fields in the Sprint Metadata section filled
3. **Current state analysis** - Based on actual project file examination
4. **Specific desired state** - Measurable outcomes, not vague goals
5. **Independent user stories** - Each story can be implemented separately
6. **Testable acceptance criteria** - Each story has specific pass/fail conditions
7. **Concrete DoD items** - Specific, actionable checklist items
8. **Technical guidance** - Actual code snippets and specific instructions
9. **Risk identification** - Potential blockers and dependencies listed
10. **Proper DoD separation** - AI vs. user responsibilities clearly marked
**VALIDATION**: Before finalizing, verify that another AI agent could execute the sprint based solely on the playbook content without additional clarification.

# 📘 Sprint Implementation Guidelines
These guidelines define how the AI agent must implement a Sprint based on the approved **Sprint Playbook**.
They ensure consistent execution, traceability, and alignment with user expectations.
---
## 0. Key Definitions
**Logical Unit of Work (LUW)**: A single, cohesive code change that:
- Implements one specific functionality
- Can be described in 1-2 sentences
- Passes all relevant tests
- Can be committed independently
**Blocked Status (`🚫 blocked`)**: A user story cannot proceed due to:
- Missing external dependencies
- Conflicting requirements
- Failed tests that cannot be auto-fixed
- Missing user clarification
**AI-Responsible DoD Items**: Checkboxes the AI can verify and tick:
- Code compiles and passes tests
- Code committed and pushed to branch
- Documentation updated
- Sprint status updated to done
**User-Only DoD Items**: Checkboxes only the user can tick:
- Branch merged into main
- Production deployment completed
- External integrations verified
---
## 1. Git & Version Control Rules
### 1.1 Commit Granularity
* Commit after each **logical unit of work (LUW)**.
* A user story may span multiple commits.
* Do not mix unrelated changes (e.g., no “feature + formatting” in one commit).
* Include tests for the LUW in the same commit if the story's DoD requires tests.
* Local WIP commits may be squashed before delivery, but history must remain clear.
### 1.2 Commit Message Style
* Use **Conventional Commits** format:
  ```
  <type>(<scope>): <subject>

  <body>

  Refs: <Story-ID(s)>
  ```
* Example:
  ```
  feat(auth): add JWT middleware

  Introduces HS256 verification for protected routes.
  Returns 401 for missing/invalid/expired tokens.

  Refs: US-3
  ```
* Allowed `<type>`: `feat`, `fix`, `refactor`, `perf`, `test`, `docs`, `build`, `ci`, `style`, `chore`, `revert`.
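As a sketch, a message in this format could be assembled mechanically (helper name and signature are illustrative, not part of any tooling this document prescribes):

```python
# Allowed commit types, as listed above.
ALLOWED_TYPES = {"feat", "fix", "refactor", "perf", "test", "docs",
                 "build", "ci", "style", "chore", "revert"}

def commit_message(type_, scope, subject, body, story_ids):
    """Build a Conventional Commits message with a Refs footer (sketch)."""
    if type_ not in ALLOWED_TYPES:
        raise ValueError(f"unknown commit type: {type_}")
    refs = ", ".join(story_ids)
    return f"{type_}({scope}): {subject}\n\n{body}\n\nRefs: {refs}"
```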
### 1.3 Branching Strategy
* Use **one dedicated branch per Sprint**:
* Naming: `feature/sprint-<id>-<short-goal>`
* Example: `feature/sprint-07-auth`
* Branch created from `main` and kept up to date via rebase or merge.
* `main` remains protected.
**Sprint ID source of truth:** The Sprint ID **must** follow the “Sprint ID selection rule” in the How-to guide.
If no prior Playbooks exist in `docs/sprints/`, start at `01`; otherwise increment the greatest existing two-digit ID (keep zero-padding).
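A branch name following the convention above could be derived like this (a sketch; the slugging rule for `<short-goal>` is our assumption, since the guide only gives the `feature/sprint-<id>-<short-goal>` pattern):

```python
import re

def sprint_branch(sprint_id, short_goal):
    """Build the sprint branch name, e.g. feature/sprint-07-auth (sketch)."""
    # Lowercase the goal and collapse anything non-alphanumeric into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", short_goal.lower()).strip("-")
    return f"feature/sprint-{sprint_id}-{slug}"
```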
### 1.4 Commit & Push Policy
* Run build and fix issues before committing.
* Commit and push regularly (at least daily).
### 1.5 PR / Merge Rules
* The AI agent **must not merge or open PRs**.
* The AI's responsibility ends with:
* Implementing all user stories.
* Committing changes to the Sprint branch.
* Ensuring the branch passes all tests.
* The **user merges** the Sprint branch into `main`.
---
## 2. Playbook Status Updating
### 2.1 User Stories
* Update each story's `Status` field (`🔲 todo` → `🚧 in progress` → `✅ done`).
* Mark `✅ done` only when the story's **DoD** is fully satisfied.
### 2.2 Sprint Status (Top-Level)
* Keep the top-level Sprint status current:
```
Status: [🔲 not started | 🚧 in progress | 🛠️ implementing <user story id> | ✅ done]
```
### 2.3 Commit & Status Sync
**Strict choreography**
- **First commit of a story**
Include the first code changes for `US-#` **and** update the Playbook in the same commit:
- Sprint status → `🛠️ implementing US-#`
- Story `US-#` status → `🚧 in progress`
- **Final commit of a story**
Include the completing code changes for `US-#` **and** update the Playbook in the same commit:
- Story `US-#` status → `✅ done`
- Tick any **AI-responsible** DoD items that became true in this commit (see below)
- **DoD checkbox updates**
Tick AI-responsible DoD items **in the same commit** that makes them true:
- ✅ Code compiles and passes automated tests
- ✅ Code is committed and pushed on branch
- ✅ Documentation is updated
- ✅ Sprint status updated to done
**NEVER tick user-only DoD items** such as:
- ❌ Branch is merged into main
- ❌ Production deployment completed
- ❌ External systems integration verified
- **No status-only commits**
Avoid standalone “status update” commits. If a previous commit forgot a status/DoD tick, include it in the **very next** code commit for that story.
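The checkbox rules above can be enforced mechanically by matching only the AI-responsible item texts when ticking (a sketch; the item wordings are taken from the template's DoD section, the helper is ours):

```python
# Substrings identifying AI-responsible DoD items (from the template).
AI_RESPONSIBLE = (
    "All user stories meet their individual Definition of Done",
    "Code compiles and passes automated tests",
    "Code is committed and pushed on branch",
    "Documentation is updated",
    "Sprint status updated to",
)

def tick_ai_items(playbook_text):
    """Tick AI-responsible checkboxes; never touch user-only ones (sketch)."""
    lines = []
    for line in playbook_text.splitlines():
        if line.lstrip().startswith("* [ ]") and any(k in line for k in AI_RESPONSIBLE):
            line = line.replace("* [ ]", "* [x]", 1)
        lines.append(line)
    return "\n".join(lines)
```

Anything not on the allow-list, such as "Branch is merged into main", is left untouched by construction.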
### 2.4 Location & Traceability
* Store Playbook in repo under `docs/sprints/sprint-<id>.md`.
* Reference Story IDs in commit messages.
### 2.5 End-of-Sprint Update
* Update Sprint status → `✅ done`.
* Update Playbook with final status changes.
* Stop execution.
---
## 3. Coding & Testing Standards
### 3.1 Style Guides
* Follow the project's existing style guides exactly unless deviation prevents story completion.
* If deviation is necessary, document rationale in commit body and ask user for approval.
* Do not mix stylistic mass-changes with functional code.
### 3.2 Code Quality
* Keep changes minimal and scoped.
* Favor readability and idiomatic solutions.
* Maintain module boundaries.
* Add/update docstrings and project docs when behavior changes.
### 3.3 Testing Policy
* **Unit tests**: required for all backend logic, utilities, data processing, and business logic.
* **UI tests**: required only if the story's DoD explicitly mentions testing UI behavior.
* Pure styling changes (CSS-only) do not require tests.
* **Integration/E2E tests**: explicitly out of scope.
* Maintain existing test coverage levels.
### 3.4 Test Execution
* Run relevant tests locally before committing.
### 3.5 Prohibited
* No large-scale refactors unless explicitly requested.
* No new frameworks or test harnesses.
* No speculative features.
---
## 4. Execution Flow
### 4.1 Story Execution Workflow
**STEP 1: Start Story**
1. Verify previous story is `✅ done` (if `🚫 blocked`, STOP - do not proceed)
2. Change story status from `🔲 todo` to `🚧 in progress`
3. Change sprint status to `🛠️ implementing US-#`
4. Commit these playbook changes with first code changes for the story
**STEP 2: Implement Story (Loop)**
For each LUW in the story:
1. Write code for one logical unit of work
2. Write tests if required by story DoD
3. Run tests and fix any failures
4. Commit LUW with conventional commit message including "Refs: US-#"
5. Push commit to branch
**STEP 3: Complete Story**
1. Verify all story acceptance criteria are met
2. Verify all AI-responsible DoD items are complete
3. Run final test suite
4. Update story status to `✅ done`
5. Tick completed AI-responsible DoD checkboxes
6. Commit these playbook updates
7. Push final commit
**STEP 4: Next Story or Block Handling**
- If current story is `✅ done`: proceed to STEP 1 for next story
- If current story is `🚫 blocked`: STOP execution, notify user, wait for instructions
- If no more stories and all are `✅ done`: proceed to End-of-Sprint workflow
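The four steps above reduce to a simple driver loop (a sketch with caller-supplied callables for implementation and acceptance checking; the statuses are from this guide, everything else is illustrative):

```python
def run_stories(stories, implement, passes_acceptance):
    """Drive the story workflow of §4.1 over a list of story dicts (sketch).

    `implement` and `passes_acceptance` are caller-supplied callables;
    execution stops as soon as any story blocks.
    """
    for story in stories:
        story["status"] = "🚧 in progress"   # STEP 1: start story
        implement(story)                     # STEP 2: implement LUWs, commit, push
        if passes_acceptance(story):         # STEP 3: verify criteria and DoD
            story["status"] = "✅ done"
        else:
            story["status"] = "🚫 blocked"
            break  # STEP 4: STOP all sprint work until the user unblocks
    return stories
```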
### 4.2 Blocking Workflow
**When ANY story becomes `🚫 blocked`:**
1. Mark story status as `🚫 blocked` with specific reason
2. Commit playbook changes immediately
3. Notify user with blocker details
4. **STOP all sprint work** - do not proceed to next stories
5. Wait for user to provide resolution instructions
6. Only resume when user gives explicit unblocking guidance
**Critical Rule: NO STORY PROGRESSION DURING BLOCKING**
- Do not start new stories while any story is `🚫 blocked`
- Do not attempt workarounds or fixes without user approval
- Sprint execution is completely paused until all blocks are resolved
### 4.3 End-of-Sprint Workflow
**STEP 1: Final Verification**
1. Verify all stories are `✅ done` (if any are `🚫 blocked`, STOP and notify user)
2. Run complete test suite
3. Update any remaining documentation
**STEP 2: Sprint Completion**
1. Update sprint status to `✅ done`
2. Tick any remaining AI-responsible DoD items
3. Commit final changes
4. Push branch to remote
**STEP 3: Stop Execution**
- Report sprint completion to user
- Do not merge branch or open PRs
---
## 5. Documentation & Communication
### 5.1 Inline Comments
* Update/add comments for new or changed functions/classes/modules.
* Keep concise and technical.
### 5.2 Project Docs
* Update README/API docs/configs when public-facing behavior changes.
* Docs updated in **same commit** as related code.
### 5.3 Exclusions
* No separate Sprint summary reports.
* No speculative documentation outside scope.
---
## 6. Failure & Error Handling
### 6.1 Error Response Protocol
**MANDATORY STEPS when encountering any blocker:**
1. Stop current work immediately
2. Mark story status as `🚫 blocked` in playbook
3. Add specific blocker reason to playbook
4. Commit playbook changes
5. Ask user for explicit resolution
6. Wait for user response - do not proceed
### 6.2 Specific Error Actions
**Test Failures:**
1. Run tests again to confirm failure
2. Copy exact error messages
3. Mark story as `🚫 blocked` with reason: "Tests failing: [error summary]"
4. Ask user: "Tests are failing with error: [exact error]. Should I fix this or wait for guidance?"
**Missing Dependencies:**
1. Identify exactly what is missing
2. Mark story as `🚫 blocked` with reason: "Missing dependency: [name]"
3. Ask user: "Missing dependency [name]. Should I install it or mock it for testing?"
**Conflicting Requirements:**
1. Document the specific conflict
2. Mark story as `🚫 blocked` with reason: "Conflicting requirements: [details]"
3. Ask user: "Found conflicting requirements: [details]. Which approach should I follow?"
**Build/Compilation Failures:**
1. Copy exact build error
2. Mark story as `🚫 blocked` with reason: "Build failing: [error summary]"
3. Ask user: "Build is failing with: [exact error]. How should I resolve this?"
### 6.3 Prohibited Actions During Blocking
**NEVER do these when `🚫 blocked`:**
- Continue to next story
- Make speculative fixes
- Change requirements to work around issues
- Skip failing tests
- Implement workarounds without approval
### 6.4 Unblocking Requirements
**AI can only resume work after user provides:**
- Explicit instruction on how to resolve the blocker
- Modified requirements if applicable
- Confirmation that workaround approach is acceptable
---
## 7. Sprint Wrap-Up
### 7.1 Completion Criteria
Sprint is complete when:
* All user stories = `✅ done`.
* Sprint status = `✅ done`.
* Final status updates completed.
* Code committed to Sprint branch.
* Docs and comments updated.
* Tests passing (when applicable).
### 7.2 Handover
* AI stops execution.
* Sprint branch remains unmerged.
### 7.3 Final Note
* No changes beyond Sprint scope.
* Playbook + Git history act as audit record.

# 📑 Sprint Playbook Template
## 0. Sprint Status
```
Status: [🔲 not started | 🚧 in progress | 🛠️ implementing <user story id> | ✅ done]
```
---
## 1. Sprint Metadata
* **Sprint ID:** \[unique identifier]
* **Start Date:** \[YYYY-MM-DD]
* **End Date:** \[YYYY-MM-DD]
* **Sprint Goal:** \[clear and concise goal statement]
* **Team/Agent Responsible:** \[AI agent name/version]
* **Branch Name (Git):** \[feature/sprint-<id>]
---
## 2. Current State of Software
*(Concise snapshot of the project before Sprint work begins)*
* **Main Features Available:** \[list]
* **Known Limitations / Issues:** \[list]
* **Relevant Files / Modules:** \[list with paths]
* **Environment / Dependencies:** \[runtime versions, frameworks, libs]
---
## 3. Desired State of Software
*(Target state after Sprint is complete)*
* **New Features:** \[list]
* **Modified Features:** \[list]
* **Expected Behavior Changes:** \[list]
* **External Dependencies / Integrations:** \[list]
---
## 4. User Stories
Each story represents a **unit of work** that can be developed and tested independently.
| Story ID | Title | Description | Acceptance Criteria | Definition of Done | Assignee | Status |
| -------- | -------------- | ---------------------------------------- | ---------------------------- | ------------------------------------------------ | ----------- | ------ |
| US-1 | \[short title] | \[detailed description of functionality] | \[conditions for acceptance] | \[implemented, tested, docs updated, lint clean] | \[AI agent] | 🔲 todo |
| US-2 | ... | ... | ... | ... | ... | 🔲 todo |
**Status options:** `🔲 todo`, `🚧 in progress`, `🚫 blocked`, `✅ done`
---
## 5. Technical Instructions
*(Guidance to help AI converge quickly on the correct solution)*
* **Code Snippets / Patterns:**
```python
# Example placeholder snippet
def example_function():
pass
```
* **Architecture Guidelines:** \[layering, module boundaries, design patterns]
* **Coding Style Conventions:** \[naming rules, formatting, linting]
* **Testing Strategy:** \[unit/integration, testing framework, coverage target]
---
## 6. Risks and Dependencies
* **Risks:** \[list potential blockers, e.g., API instability, missing test coverage]
* **Dependencies:** \[other modules, external services, libraries, team inputs]
---
## 7. Sprint Definition of Done (DoD)
The Sprint is complete when:
**AI-Responsible Items** (AI agent can verify and tick):
* [ ] All user stories meet their individual Definition of Done.
* [ ] Code compiles and passes automated tests.
* [ ] Code is committed and pushed on branch `[feature/sprint-<id>]`.
* [ ] Documentation is updated.
* [ ] Sprint status updated to `✅ done`.
**User-Only Items** (Only user can verify and tick):
* [ ] Branch is merged into main.
---