The Sensory Organs

Integrating the Model Context Protocol (MCP) to break the Swarm out of the sandbox.

In the early days of the IPTV Proxy project, our agent Swarm was incredibly productive, but it suffered from a fatal flaw: blindness.

The orchestrator.py daemon was essentially a prison guard. It would find a markdown file in the active/ folder, slide it under the door of the Agent's cell via a CLI prompt, wait for the agent to finish modifying code, and then lock the door again.

If Henry (the PM Agent) needed to prioritize tickets, he couldn't just "look" at the tasks/backlog/ directory. He had no eyes. The Human Architect had to manually run ls or cat, copy the text output from the terminal, and paste it into the Claude chat window. We were acting as a slow, inefficient API between the AI and its own filesystem.

The Enterprise Upgrade: MCP

To solve this, we integrated Anthropic's Model Context Protocol (MCP). MCP is an open, standardized protocol that allows AI models to establish secure, two-way connections with local data sources and external APIs, natively over standard transports: stdio streams or SSE (Server-Sent Events).
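Under the hood, every MCP message is framed as JSON-RPC 2.0, whichever transport carries it. A client opening a session begins with an initialize handshake shaped roughly like this (the clientInfo values below are illustrative, not from the repo):

```python
import json

# MCP frames every message as JSON-RPC 2.0. A client opening a
# session sends an `initialize` request roughly shaped like this
# (the clientInfo values here are illustrative, not from the repo).
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "iptv-swarm-client", "version": "0.1"},
    },
}

# Serialize to the wire format the transport actually carries
wire = json.dumps(initialize)
print(wire)
```

Everything that follows (reading resources, calling tools) rides on this same request/response channel.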

Instead of manually passing context back and forth, we built scripts/mcp_server.py. This Python script runs locally and acts as the "Sensory Organs" for the Swarm. When an MCP-compatible client connects to this server, the agent instantly gains native superpowers.

"We gave the Swarm eyes to see the project state, and hands to trigger backend python endpoints without human intermediation."

How It Works: Resources vs. Tools

Our custom MCP server splits the agentic interaction into two distinct paradigms defined by the official MCP specification:

1. Resources (Read-Only Sight)
A Resource is a data stream. We exposed our physical filesystem state machine (tasks/active/, tasks/backlog/, etc.) as MCP Resources using custom URI schemes. Now, Henry doesn't need to ask the human what tasks are active. Claude simply requests to read the tasks://active resource, and the Python daemon feeds the exact state of the directory straight into the agent's context window. It's like the agent simply turning its head to look at a whiteboard.

```python
# Inside scripts/mcp_server.py
@mcp.resource("tasks://{stage}")
def get_tasks_in_stage(stage: str) -> str:
    # Claude can now natively stream the contents of
    # tasks/backlog/, tasks/active/, etc., straight to its brain.
    directory = os.path.join(PROJECT_ROOT, "tasks", stage)
    # Return a formatted string of all markdown files in the stage
    sections = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".md"):
            with open(os.path.join(directory, name)) as f:
                sections.append(f"## {name}\n{f.read()}")
    return "\n\n".join(sections)
```
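The tasks:// scheme above is an ordinary URI, so the stage name can be recovered with the standard library. A hypothetical helper (not part of the repo) makes the mapping explicit:

```python
from urllib.parse import urlparse

# Hypothetical helper (not in the repo) showing how a tasks:// URI
# from the resource template above maps onto a stage directory name.
def stage_from_uri(uri: str) -> str:
    parsed = urlparse(uri)
    if parsed.scheme != "tasks":
        raise ValueError(f"not a tasks URI: {uri}")
    return parsed.netloc

print(stage_from_uri("tasks://backlog"))  # -> backlog
```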

2. Tools (Executable Hands)
While Resources grant sight, Tools grant manipulation. A Tool is an actionable function that the AI model can trigger. We exposed the generate_veo_video.py script, as well as the new Swarm Brain RAG engine, as MCP Tools. Now, an agent can autonomously decide to generate a cinematic video of a new feature, or query its past memories, without the human ever having to run the Python scripts manually.

```python
# Inside scripts/mcp_server.py
@mcp.tool()
def generate_video(prompt: str) -> str:
    # Triggers the Veo API subprocess directly and hands the
    # script's output back to the agent
    result = subprocess.run(
        ["python", "generate_veo_video.py", prompt],
        capture_output=True,
        text=True,
    )
    return result.stdout
```
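When an agent decides to use that hand, the client issues a tools/call request over the same JSON-RPC channel defined by the MCP specification. Roughly (the prompt string here is invented for illustration):

```python
import json

# Roughly what goes over the wire when Claude invokes the tool
# above: a JSON-RPC 2.0 `tools/call` request per the MCP spec
# (the prompt string is invented for illustration).
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "generate_video",
        "arguments": {"prompt": "cinematic teaser of the new feature"},
    },
}
print(json.dumps(call, indent=2))
```

The server routes params.name to the decorated Python function and returns its string result in the JSON-RPC response.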

Practical Usage: Wiring the Client to the Swarm

The MCP server is live right now inside the repository, but it requires a "Client" to connect to it. Depending on whether you use the official Claude Desktop application or the terminal-based Claude Code CLI, the setup differs slightly.

Option A: The Claude Code CLI (Recommended)

If you are using the terminal-based Claude CLI (which you are using right now), configuration is instant. You do not need to edit any JSON files. Simply run this command in your terminal:

```shell
claude mcp add iptv-swarm python "C:\Clarkyboy-Lair\scripts\mcp_server.py"
```

Testing it: Once added, the Claude CLI will natively understand the server without restarts. You can test it by simply typing into your terminal prompt right now: "Use your MCP tools to query the swarm brain for stream buffering." or "Read the tasks://backlog resource."

Option B: Claude Desktop (GUI)

Step 1: Locate the Config
On Windows, navigate to your AppData roaming profile: %APPDATA%\Claude\claude_desktop_config.json.

Step 2: Inject the Server Binding
Open the JSON file and add the IPTV Swarm server configuration. This tells Claude Desktop to execute our Python daemon securely over stdio when the app boots.

```json
{
  "mcpServers": {
    "iptv-swarm": {
      "command": "python",
      "args": ["C:/Clarkyboy-Lair/scripts/mcp_server.py"]
    }
  }
}
```

Step 3: Verification
Restart your Claude Desktop application. At the bottom of the input window, you should see a new "Hammer" icon (Tools). If you click it, you will see query_swarm_brain and generate_video listed natively inside the Claude GUI.

You can now simply type into Claude Desktop: "Hey, what tasks are sitting in tasks://backlog?" and Claude will autonomously reach out, read the resource, and answer you without you touching the terminal.

Option C: VS Code AI Extensions (Cline, Roo, OpenCode)

If you prefer using AI coding extensions inside VS Code, you can also give them native access to the Swarm Brain.

  1. Open the extension's side panel and click the MCP Servers icon (usually a plug icon at the bottom of the chat).
  2. Click the button to edit the MCP settings (this opens cline_mcp_settings.json or a similar configuration file).
  3. Paste the exact same JSON configuration block from Option B above into the file and save it.
  4. Click the circular refresh icon next to the new iptv-swarm server in the extension's GUI. You will immediately see the tools register!

The Result

By wiring the project's internal physics to the official MCP standard, we drastically reduced copy-paste hallucination loops. The Swarm is no longer trapped in a text box; it is physically tethered to the repository's heartbeat. It can see. It can act.