In the early days of the IPTV Proxy project, our agent Swarm was incredibly productive, but it suffered from a fatal flaw: blindness.
The orchestrator.py daemon was essentially a prison guard. It would find a markdown file in the
active/ folder, slide it under the door of the Agent's cell via a CLI prompt, wait for the
agent to finish modifying code, and then lock the door again.
If Henry (the PM Agent) needed to prioritize tickets, he couldn't just "look" at the
tasks/backlog/ directory. He had no eyes. The Human
Architect had to manually run ls or cat, copy the text output from the terminal,
and paste it into the
Claude chat window. We were acting as a slow, inefficient API between the AI and its own filesystem.
To solve this, we integrated Anthropic's revolutionary Model Context Protocol (MCP). MCP is an open, standardized protocol that lets AI models establish secure, two-way connections with local data sources and external APIs, natively over standard stdio or SSE (Server-Sent Events) transports.
Instead of manually passing context back and forth, we built scripts/mcp_server.py. This Python
script runs
locally and acts as the "Sensory Organs" for the Swarm. When an MCP-compatible client connects to this
server, the agent instantly gains native superpowers.
Our custom MCP server splits the agentic interaction into two distinct paradigms defined by the official MCP specification:
1. Resources (Read-Only Sight)
A Resource is a data stream. We exposed our physical filesystem state machine (tasks/active/,
tasks/backlog/,
etc.) as MCP Resources using custom URI schemes. Now, Henry doesn't need to ask the human what tasks are
active. Claude simply requests to read the tasks://active resource, and the Python daemon feeds
the exact state of the directory straight into the agent's context window. It's like the agent simply
turning its head to look at a whiteboard.
2. Tools (Executable Hands)
While Resources grant sight, Tools grant manipulation. A Tool is an actionable function that the AI model
can trigger. We exposed the generate_veo_video.py
script, as well as the new Swarm Brain RAG engine, as MCP Tools. Now, an agent can autonomously decide to
generate a cinematic video of a new feature, or query its past memories, without the human ever having to
run the Python scripts manually.
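Conceptually, a Tool is just a named, registered function that the model may invoke with structured arguments. The minimal registry below illustrates the idea: the tool name `query_swarm_brain` comes from this article, but the decorator, the dispatch function, and the placeholder return value are illustrative sketches, not the project's actual code (the real server would wire these through the MCP SDK).

```python
from typing import Any, Callable, Dict

# Hypothetical tool registry; the real server registers these via the MCP SDK.
TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str) -> Callable:
    """Decorator that registers a function as an invokable MCP-style tool."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("query_swarm_brain")
def query_swarm_brain(question: str) -> str:
    # Placeholder body: the real tool would query the Swarm Brain RAG engine.
    return f"[brain] top matches for: {question}"

def call_tool(name: str, **arguments: Any) -> str:
    """Dispatch a tool call the way an MCP client's request would be routed."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)
```

When the model decides to use a tool, the client sends the tool name plus arguments, and the server dispatches: `call_tool("query_swarm_brain", question="stream buffering")` returns `"[brain] top matches for: stream buffering"`.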
The MCP server is live right now inside the repository, but it requires a client to connect to it. The setup differs slightly depending on whether you use the official Claude Desktop application or the terminal-based Claude Code CLI.
If you are using the terminal-based Claude Code CLI, configuration is instant. You do not need to edit any JSON files; simply run this command in your terminal:
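The exact command did not survive into this draft. Assuming the server lives at `scripts/mcp_server.py` (as stated above) and you name it `iptv-swarm`, a plausible invocation of Claude Code's `claude mcp add` subcommand looks like this; verify the server name and your Python interpreter path, and run it from the repository root:

```shell
claude mcp add iptv-swarm -- python scripts/mcp_server.py
```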
Testing it: Once added, the Claude CLI will natively understand the server without restarts. You can test it by simply typing into your terminal prompt right now: "Use your MCP tools to query the swarm brain for stream buffering." or "Read the tasks://backlog resource."
If you are using the Claude Desktop application instead, you will need to edit its configuration file.
Step 1: Locate the Config
On Windows, navigate to your AppData roaming profile:
%APPDATA%\Claude\claude_desktop_config.json.
Step 2: Inject the Server Binding
Open the JSON file and add the IPTV Swarm server configuration. This tells Claude Desktop to execute our
Python daemon securely over stdio when the app boots.
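The JSON itself is not shown here, so below is a minimal binding in Claude Desktop's `mcpServers` format. The server name `iptv-swarm` and the repository path are assumptions; substitute the absolute path to your clone of the repo, and make sure `python` resolves to an interpreter that can run the daemon:

```json
{
  "mcpServers": {
    "iptv-swarm": {
      "command": "python",
      "args": ["/absolute/path/to/iptv-proxy/scripts/mcp_server.py"]
    }
  }
}
```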
Step 3: Verification
Restart your Claude Desktop application. At the bottom of the input window, you should see a new "Hammer"
icon (Tools). Click it and you will see query_swarm_brain and generate_video
listed natively inside the Claude GUI.
You can now simply type into Claude Desktop: "Hey, what tasks are sitting in tasks://backlog?" and Claude will autonomously reach out, read the resource, and answer you without you touching the terminal.
If you prefer using AI coding extensions inside VS Code, you can also give them native access to the Swarm Brain.
Add the server to the extension's MCP settings (for Cline this lives in cline_mcp_settings.json, or a similar configuration file for other extensions), then enable the iptv-swarm server in the extension's GUI. You will immediately see the tools register!
By wiring the project's internal physics to the official MCP standard, we drastically reduced copy-paste hallucination loops. The Swarm is no longer trapped in a text box; it is physically tethered to the repository's heartbeat. The agents can see. They can act.