Build a Filesystem Swarm in One AI Sprint

A practical starter blueprint for teams who want reliable multi-agent delivery in a single AI-run cycle.
[Figure: system flowchart]

If you can create folders and write markdown, you can build the first version of this system. The hard part is not coding. The hard part is choosing constraints and still respecting them when urgency kicks the door in.

This guide is intentionally opinionated. It prioritizes deterministic execution over novelty. The objective is simple: within one AI sprint, you should have one full feature cycle that can be replayed, audited, and improved.

A sprint build is not about completeness. It is about proving the workflow contract under real-world chaos, rapid iterations, and changing priorities.

Step 1: Create a Physical State Machine

Start with folders, not prompts. These directories are your process truth:

tasks/
  backlog/
  active/
  review/
  human-review/
  done/

docs/
  PROJECT_CONTEXT.md
  ROADMAP.md

This gives you explicit lifecycle state. No one has to infer progress from chat logs. They can see where each task lives.
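The directory skeleton above can be bootstrapped in a few lines. A minimal sketch (the `bootstrap` helper name is an assumption, not part of the article's tooling):

```python
# Sketch: create the filesystem state machine described above.
from pathlib import Path

STAGES = ["backlog", "active", "review", "human-review", "done"]

def bootstrap(root: str = ".") -> None:
    base = Path(root)
    for stage in STAGES:
        # Each lifecycle stage is a real directory, not a chat-log convention.
        (base / "tasks" / stage).mkdir(parents=True, exist_ok=True)
    (base / "docs").mkdir(parents=True, exist_ok=True)
    for doc in ("PROJECT_CONTEXT.md", "ROADMAP.md"):
        (base / "docs" / doc).touch(exist_ok=True)

bootstrap("demo")
```

Because the state lives on disk, `ls tasks/active` is the whole status report.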

Step 2: Define the Task Contract

Each task file should declare status metadata, scope, acceptance criteria, and test commands. This is a sanitized version of the exact template style we use in production:

---
status: backlog
assigned_agent: full_stack_single_agent
required_role: full_stack_single_agent
branch: feature/TASK-XYZ-short-title
last_completed_step: ""
current_working_step: ""
qa_fail_count: 0
---

# Task: [SHORT_TITLE]

## Scope
- Files to modify: src/example.py, static/js/example.js
- Files to read: server.py, ui.py

## Acceptance Criteria
- [ ] Criterion one
- [ ] Criterion two
- [ ] Criterion three

## Test Commands
python -m unittest discover -s tests -p "test_*.py"

Rule: no code changes begin without a task file in the active stage.
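That rule is mechanically checkable. A minimal sketch (the `may_start` helper and the hand-rolled front-matter parser are illustrative assumptions, not the article's production code):

```python
# Sketch: refuse to begin work unless the task file sits in the active
# stage and declares the required front-matter fields.
from pathlib import Path

REQUIRED_KEYS = {"status", "assigned_agent", "required_role",
                 "branch", "qa_fail_count"}

def parse_front_matter(text: str) -> dict:
    # The front matter is the block between the first two `---` markers.
    _, raw, _ = text.split("---", 2)
    fields = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields

def may_start(task_path: Path) -> bool:
    # Rule: no code changes begin without a task file in the active stage.
    if task_path.parent.name != "active":
        return False
    return REQUIRED_KEYS <= parse_front_matter(task_path.read_text()).keys()
```

Running this check at the top of the coder prompt turns the rule from etiquette into a gate.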

Step 3: Split Roles, Not Just Work

In a small team, one person may wear multiple hats. That is fine. But the decision mode must still change by stage: the coder implements within scope, QA verifies the acceptance criteria independently, and a human reviewer owns the release decision.

[Figure: role routing chart]
Role clarity is a throughput accelerator, not organizational theater.
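The stage-to-role routing can be written down as data, which makes hat-switching explicit even for a team of one. A sketch, assuming the five stages above (the role names besides "coder" and "qa" are illustrative assumptions):

```python
# Sketch: each lifecycle stage maps to exactly one decision mode, even
# when the same person fills several roles.
STAGE_ROLES = {
    "backlog": "planner",        # decide what to do and how to scope it
    "active": "coder",           # implement only in-scope changes
    "review": "qa",              # re-run acceptance criteria independently
    "human-review": "releaser",  # own the ship/no-ship decision
    "done": "nobody",            # terminal stage; no further decisions
}

def role_for(stage: str) -> str:
    return STAGE_ROLES[stage]
```

Looking up the role before acting forces the mode switch the stage requires.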

Step 4: Add Tiny Automation

You do not need orchestration complexity on day one. Add only what protects consistency and prevents "I thought you did that" moments.

Automation should remove clerical errors, not hide process state.

Sanitized coding and QA prompt patterns can be this direct:

# Coder Prompt Core
1. Read task file and required context files.
2. Implement only in-scope changes.
3. Run required tests.
4. Update task status to in_review.
5. Move task file to review stage.

# QA Prompt Core
1. Re-run acceptance criteria independently.
2. Verify scope boundaries.
3. If pass: move to human-review.
4. If fail: return to active with explicit rework notes.
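The QA branch above is one file move plus one metadata update. A minimal sketch (the `qa_transition` helper is a hypothetical name; the article does not prescribe an implementation):

```python
# Sketch: move a reviewed task to human-review on pass, or back to
# active with rework notes and a bumped qa_fail_count on fail.
import re
from pathlib import Path

def qa_transition(task_path: Path, passed: bool, notes: str = "") -> Path:
    text = task_path.read_text()
    stage = "human-review" if passed else "active"
    if not passed:
        # Make repeat failures visible in the task file itself.
        text = re.sub(r"qa_fail_count: (\d+)",
                      lambda m: f"qa_fail_count: {int(m.group(1)) + 1}", text)
        text += f"\n## Rework Notes\n{notes}\n"
    dest = task_path.parent.parent / stage / task_path.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(text)
    task_path.unlink()  # a task lives in exactly one stage at a time
    return dest
```

Because the move and the note travel in the same file, the rework context can never get separated from the task.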

Step 5: Run One End-to-End Feature

Choose one small but real feature. Move it through all stages. Do not skip QA. Do not skip human review. The first completed cycle is your baseline for future optimization.

Operational Checklist for One AI Sprint

Common Mistakes in First Builds

[Figure: digital memory visualization]
Keep memory in artifacts. Keep decisions in contracts. Keep operations boring.

What to Improve First After Sprint One

After your first successful AI sprint cycle, tune these in order: state visibility, scope enforcement, QA independence, and release ownership.

One advanced upgrade: add a promotion script so drafts can move from private preview to public pages with a single audited command.
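A minimal sketch of that promotion idea (the directory names, `promote` helper, and `promotions.log` file are assumptions for illustration):

```python
# Sketch: promote a draft from private preview to public pages with a
# single command, leaving an audit line behind.
from pathlib import Path
from datetime import datetime, timezone

def promote(slug: str, preview_dir: str = "preview",
            public_dir: str = "public",
            audit_log: str = "promotions.log") -> Path:
    src = Path(preview_dir) / f"{slug}.md"
    dst = Path(public_dir) / f"{slug}.md"
    dst.parent.mkdir(parents=True, exist_ok=True)
    dst.write_text(src.read_text())
    src.unlink()
    stamp = datetime.now(timezone.utc).isoformat()
    with open(audit_log, "a") as log:
        log.write(f"{stamp} promoted {slug}\n")  # replayable audit record
    return dst
```

The append-only log is what makes the command auditable: every publish decision can be replayed later.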

The point is not to copy this system exactly. The point is to adopt the principles: explicit state, constrained execution, independent verification, and clear ownership at release time.

If you can replay how a decision was made, your system is maturing. If you cannot, your system is improvising with production consequences.