If you can create folders and write markdown, you can build the first version of this system. The hard part is not coding. The hard part is choosing constraints and still respecting them when urgency kicks the door in.
This guide is intentionally opinionated. It prioritizes deterministic execution over novelty. The objective is simple: within one AI sprint, you should have one full feature cycle that can be replayed, audited, and improved.
Start with folders, not prompts. These directories are your process truth:
```
tasks/
  backlog/
  active/
  review/
  human-review/
  done/
docs/
PROJECT_CONTEXT.md
ROADMAP.md
```
This gives you explicit lifecycle state. No one has to infer progress from chat logs. They can see where each task lives.
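The layout above can be scaffolded in a few lines. A minimal sketch, assuming the stage folders are nested under `tasks/` (the `scaffold` helper and its defaults are illustrative, not part of the original system):

```python
from pathlib import Path

# Stage names follow the folder layout described in this guide.
STAGES = ["backlog", "active", "review", "human-review", "done"]

def scaffold(root: str = ".") -> None:
    """Create the stage directories and placeholder context files
    if they do not already exist. Safe to re-run."""
    base = Path(root)
    for stage in STAGES:
        (base / "tasks" / stage).mkdir(parents=True, exist_ok=True)
    (base / "docs").mkdir(exist_ok=True)
    for name in ("PROJECT_CONTEXT.md", "ROADMAP.md"):
        path = base / name
        if not path.exists():
            path.write_text(f"# {path.stem}\n")
```

Running `scaffold()` once at project start is enough; it never clobbers existing files.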
Each task file should carry its own state, scope, and acceptance criteria. This is a sanitized version of the exact template style we use in production:
```markdown
---
status: backlog
assigned_agent: full_stack_single_agent
required_role: full_stack_single_agent
branch: feature/TASK-XYZ-short-title
last_completed_step: ""
current_working_step: ""
qa_fail_count: 0
---

# Task: [SHORT_TITLE]

## Scope
- Files to modify: src/example.py, static/js/example.js
- Files to read: server.py, ui.py

## Acceptance Criteria
- [ ] Criterion one
- [ ] Criterion two
- [ ] Criterion three

## Test Commands
python -m unittest discover -s tests -p "test_*.py"
```
Rule: no code changes begin without a task file in the active stage.
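The rule is easy to enforce mechanically, for example from a pre-commit hook. A sketch, assuming the folder layout above (the `check_active_task` name is hypothetical):

```python
import sys
from pathlib import Path

def check_active_task(root: str = ".") -> bool:
    """Return True if at least one task file sits in tasks/active.
    Wire this into a pre-commit hook to block commits otherwise."""
    active = Path(root, "tasks", "active")
    tasks = sorted(active.glob("*.md")) if active.is_dir() else []
    if not tasks:
        print("Blocked: no task file in tasks/active.", file=sys.stderr)
        return False
    print("Active task(s):", ", ".join(t.name for t in tasks))
    return True
```

A one-line hook (`check_active_task() or sys.exit(1)`) is enough to make the rule non-optional.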
In a small team, one person may wear multiple hats. That is fine. But the decision mode must still change with the stage: building while a task is active, verifying in review, and approving in human-review.
You do not need orchestration complexity on day one. Add only what protects consistency and prevents "I thought you did that" moments.
Automation should remove clerical errors, not hide process state.
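One concrete example of that principle: a stage-move helper that keeps the file's location and its `status:` front-matter line in agreement, so neither can silently drift. This is a sketch under the layout above; the `advance` helper is illustrative:

```python
from pathlib import Path

# Stage order mirrors the folder lifecycle in this guide.
STAGE_ORDER = ["backlog", "active", "review", "human-review", "done"]

def advance(task_file: Path) -> Path:
    """Move a task file to the next stage folder and rewrite its
    `status:` line to match, so metadata and location never disagree."""
    current = task_file.parent.name
    nxt = STAGE_ORDER[STAGE_ORDER.index(current) + 1]
    lines = []
    for line in task_file.read_text().splitlines():
        if line.startswith("status:"):
            line = f"status: {nxt}"
        lines.append(line)
    dest = task_file.parent.parent / nxt / task_file.name
    dest.write_text("\n".join(lines) + "\n")
    task_file.unlink()
    return dest
```

Because the script does both updates atomically from the caller's point of view, the process state stays visible in the file system instead of being hidden inside tooling.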
Sanitized coding and QA prompt patterns can be this direct:
```
# Coder Prompt Core
1. Read task file and required context files.
2. Implement only in-scope changes.
3. Run required tests.
4. Update task status to in_review.
5. Move task file to review stage.

# QA Prompt Core
1. Re-run acceptance criteria independently.
2. Verify scope boundaries.
3. If pass: move to human-review.
4. If fail: return to active with explicit rework notes.
```
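The QA branch above can itself be scripted so the pass/fail outcome always lands in the right folder with the right bookkeeping. A sketch assuming the template's front-matter fields; the `record_qa_verdict` helper is hypothetical:

```python
from pathlib import Path

def record_qa_verdict(task_file: Path, passed: bool, notes: str = "") -> Path:
    """Apply the QA branch: pass -> human-review; fail -> back to
    active with an incremented qa_fail_count and explicit notes."""
    dest_stage = "human-review" if passed else "active"
    out = []
    for line in task_file.read_text().splitlines():
        if line.startswith("status:"):
            line = f"status: {dest_stage}"
        elif not passed and line.startswith("qa_fail_count:"):
            count = int(line.split(":", 1)[1]) + 1
            line = f"qa_fail_count: {count}"
        out.append(line)
    if not passed and notes:
        out += ["", "## Rework Notes", f"- {notes}"]
    dest = task_file.parent.parent / dest_stage / task_file.name
    dest.write_text("\n".join(out) + "\n")
    task_file.unlink()
    return dest
```

Appending rework notes directly to the task file keeps the failure history in the same artifact the coder reads next.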
Choose one small but real feature. Move it through all stages. Do not skip QA. Do not skip human review. The first completed cycle is your baseline for future optimization.
After your first successful AI sprint cycle, tune the system one constraint at a time, starting with whatever caused the most friction.
One advanced upgrade: add a promotion script so drafts can move from private preview to public pages with a single audited command.
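A minimal sketch of such a promotion script, assuming drafts live as files and an append-only log serves as the audit trail (the `promote` helper, paths, and log format are all illustrative):

```python
from datetime import datetime, timezone
from pathlib import Path

def promote(draft: Path, public_dir: Path, log: Path) -> Path:
    """Move a draft into the public directory and append a
    timestamped audit line so every promotion is traceable."""
    public_dir.mkdir(parents=True, exist_ok=True)
    dest = public_dir / draft.name
    dest.write_text(draft.read_text())
    draft.unlink()
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with log.open("a") as fh:
        fh.write(f"{stamp} promoted {draft.name}\n")
    return dest
```

The single entry point matters more than the implementation: if promotion only ever happens through one command, the audit log is complete by construction.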
The point is not to copy this system exactly. The point is to adopt the principles: explicit state, constrained execution, independent verification, and clear ownership at release time.