
Hivemind -- Multi-Agent Orchestrator
Skill
Your orchestrator that coordinates agent swarms with task decomposition and consensus protocols -- agents working together.
About
name: hivemind
description: >
  Coordinate multiple AI agents with swarm intelligence, supervisor patterns,
  and task decomposition.
  USE WHEN: User needs multi-agent orchestration, fan-out/fan-in workflows,
  agent delegation, consensus mechanisms, or collaborative AI pipelines.
  DON'T USE WHEN: User needs a single autonomous agent (use Phantom),
  persistent memory (use Recall), or MCP tooling (use Switchblade).
  OUTPUTS: Multi-agent architectures, supervisor configurations, communication
  protocols, task decomposition strategies, consensus patterns, pipeline designs.
version: 1.0.0
author: SpookyJuice
tags: [multi-agent, orchestration, swarm, collaboration, delegation, pipelines]
price: 14
author_url: "https://www.shopclawmart.com"
support: "brian@gorzelic.net"
license: proprietary
osps_version: "0.1"
Hivemind
Version: 1.0.0 Price: $14 Type: Skill
Description
A single agent hits a ceiling fast -- limited context window, one reasoning thread, no specialization. The answer isn't a smarter model; it's multiple agents with distinct roles, coordinated by patterns borrowed from distributed systems and swarm intelligence. Hivemind gives you the orchestration playbook for multi-agent systems that actually work in production.
The hard part of multi-agent systems isn't spawning agents -- it's coordination. How does the supervisor know when to intervene? How do you prevent two agents from contradicting each other? How do you decompose a task so agents can work in parallel without stepping on each other's state? These are distributed systems problems wearing an AI costume, and they require distributed systems solutions.
This skill covers the three dominant orchestration patterns (supervisor, peer-to-peer, hierarchical), communication protocols between agents, task decomposition strategies, consensus mechanisms for conflicting outputs, and the observability you need to debug a system where five agents are all talking at once.
Prerequisites
- LLM API access (Anthropic, OpenAI, or compatible provider)
- Python 3.11+ or Node.js 18+ runtime
- Understanding of single-agent patterns (see Phantom skill)
- For production: message queue or event bus (Redis, RabbitMQ, or in-process)
- Structured logging with correlation IDs
Setup
- Copy SKILL.md into your OpenClaw skills directory
- Set your LLM provider credentials: export ANTHROPIC_API_KEY="sk-ant-..."
- Reload OpenClaw
Commands
- "Design a multi-agent system for [task domain]"
- "Implement a supervisor agent that delegates to [specialist agents]"
- "Build a fan-out/fan-in pipeline for [parallel task]"
- "Add consensus to resolve conflicting agent outputs"
- "Decompose [complex task] into sub-tasks for parallel agents"
- "Set up agent communication for [workflow type]"
- "Build a hierarchical agent system with [N levels]"
- "Monitor and trace a multi-agent execution"
- "Handle agent failures in a [pipeline/swarm] architecture"
Workflow
Supervisor Pattern Implementation
- Role definition -- define each agent's specialty with a focused system prompt, dedicated tool set, and clear scope boundaries. A "research agent" should not also be writing code. Narrow scope produces better results and makes failures easier to diagnose. Document what each agent can and cannot do.
- Supervisor design -- the supervisor agent receives the user's task and decides: which specialist(s) to invoke, in what order, with what inputs. The supervisor sees agent descriptions (not their full prompts) and routes based on task requirements. It should not do substantive work itself -- its job is orchestration.
- Delegation protocol -- define a structured message format for task delegation: task ID, description, required output format, deadline (max iterations), and context from previous agents. The specialist returns a structured result with: status (complete/partial/failed), output, confidence score, and any caveats.
- Result aggregation -- the supervisor collects results from specialists and decides: accept the result, request revision with feedback, delegate to a different specialist, or synthesize multiple results into a final answer. This is where most multi-agent systems break down -- the aggregation logic must handle partial results, contradictions, and failures.
- Escalation paths -- when a specialist fails or produces low-confidence output, the supervisor has options: retry with more context, delegate to a more capable (expensive) model, break the subtask into smaller pieces, or escalate to a human. Define these paths explicitly.
- State management -- the supervisor maintains a task graph: which subtasks are pending, in-progress, complete, or failed. This graph drives the orchestration loop and enables parallel execution when subtasks are independent.
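The delegation protocol, result handling, and task graph above can be sketched in a few dataclasses. This is a minimal illustration, not a framework API; all names here (Delegation, AgentResult, Supervisor) are hypothetical, and the real accept/revise/escalate logic would be richer.

```python
from dataclasses import dataclass, field
from enum import Enum
import uuid

class TaskStatus(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETE = "complete"
    FAILED = "failed"

@dataclass
class Delegation:
    """Structured task handed from supervisor to specialist."""
    description: str
    output_format: str
    max_iterations: int = 3          # deadline expressed as an iteration cap
    context: str = ""                # filtered context from previous agents
    task_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

@dataclass
class AgentResult:
    """Structured result returned by a specialist."""
    task_id: str
    status: TaskStatus
    output: str
    confidence: float
    caveats: list[str] = field(default_factory=list)

class Supervisor:
    """Maintains the task graph and decides what to do with each result."""

    def __init__(self, confidence_floor: float = 0.6):
        self.graph: dict[str, TaskStatus] = {}
        self.confidence_floor = confidence_floor

    def delegate(self, description: str, output_format: str) -> Delegation:
        d = Delegation(description, output_format)
        self.graph[d.task_id] = TaskStatus.IN_PROGRESS
        return d

    def handle(self, result: AgentResult) -> str:
        # Escalation path: failed subtasks go up, low confidence gets a retry.
        if result.status is TaskStatus.FAILED:
            self.graph[result.task_id] = TaskStatus.FAILED
            return "escalate"
        if result.confidence < self.confidence_floor:
            return "request_revision"
        self.graph[result.task_id] = TaskStatus.COMPLETE
        return "accept"
```

The task graph here is just a status map; a production version would store edges between subtasks so independent ones can be dispatched in parallel.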
Fan-Out/Fan-In Pipeline
- Task decomposition -- analyze the input task and identify independent subtasks that can run in parallel. Use an LLM call (the "planner") to decompose, or use rule-based decomposition if the task structure is predictable. Output: a DAG of subtasks with dependencies.
- Parallel dispatch -- spawn agents for independent subtasks simultaneously. Each agent gets: its subtask description, relevant context (not the full task context -- this wastes tokens), output schema, and iteration limit. Use async/await or a task queue to manage concurrency.
- Progress tracking -- monitor each parallel agent's status. Implement timeouts per subtask -- if one agent hangs, it shouldn't block the entire pipeline. Use a "fastest N of M" pattern when you have redundant agents: dispatch to 3 agents, take the first 2 results that pass validation.
- Result merging -- collect completed subtask results and merge them into a coherent output. This is often another LLM call: "Given these subtask results, synthesize a final answer." The merge agent needs to handle: missing results (timed-out agents), contradictory results, and varying quality levels.
- Dependency resolution -- some subtasks depend on others' outputs. Execute the DAG in topological order: independent tasks first (fan-out), then dependent tasks using predecessors' outputs, then final synthesis (fan-in). Handle the case where a dependency fails and its downstream tasks cannot proceed.
- Error isolation -- a failure in one parallel branch should not poison the others. Implement error boundaries around each agent invocation. If agent B fails, agents A and C should still complete, and the merge step should work with partial results.
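The parallel dispatch, per-subtask timeouts, and error boundaries above can be sketched with asyncio. Here run_subtask is a hypothetical stand-in for a real agent call (an LLM request or tool invocation); the guarded wrapper is the error boundary that keeps one branch's failure from poisoning the rest.

```python
import asyncio

async def run_subtask(name: str, delay: float) -> str:
    # Stand-in for a real agent call; "bad" simulates an agent failure.
    await asyncio.sleep(delay)
    if name == "bad":
        raise RuntimeError("agent failure")
    return f"{name}: done"

async def fan_out(subtasks: dict[str, float], timeout: float = 1.0) -> dict:
    """Dispatch independent subtasks in parallel with isolation per branch."""

    async def guarded(name: str, delay: float):
        try:
            # Per-subtask timeout: a hanging agent can't block the pipeline.
            return name, await asyncio.wait_for(run_subtask(name, delay), timeout)
        except Exception:
            # Error boundary: a failed or timed-out branch yields None,
            # and the merge step must cope with the partial result.
            return name, None

    pairs = await asyncio.gather(*(guarded(n, d) for n, d in subtasks.items()))
    return dict(pairs)
```

The merge (fan-in) step would consume this dict, synthesizing an answer from the non-None results and flagging the missing ones.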
Agent Communication Protocol
- Message format -- define a standard message schema that all agents use: sender ID, recipient ID (or broadcast), message type (task, result, query, update), payload, correlation ID (for tracing), and timestamp. Structured messages prevent the "telephone game" degradation you get with freeform text passing.
- Channel types -- implement three communication channels: (a) direct messages between specific agents, (b) broadcast messages to all agents in a group, (c) a shared blackboard that any agent can read/write. Direct for delegation, broadcast for status updates, blackboard for shared state.
- Shared context management -- agents often need shared state: a document being collaboratively edited, a knowledge base being populated, or a plan being refined. Use a shared data structure with read/write locking to prevent conflicts. Each write includes the agent ID and rationale.
- Conflict resolution -- when two agents produce contradictory outputs, you need a resolution strategy: (a) supervisor decides, (b) agents debate and reach consensus, (c) majority vote among N agents, (d) confidence-weighted selection. Choose based on your domain -- creative tasks benefit from debate, factual tasks from confidence scoring.
- Backpressure -- if agents produce results faster than the supervisor can process them, you need backpressure. Implement a bounded queue per agent. When the queue is full, the agent pauses until the supervisor catches up. This prevents memory exhaustion in long-running pipelines.
Output Format
HIVEMIND -- MULTI-AGENT ARCHITECTURE
System: [System Name]
Pattern: [Supervisor / Fan-Out-Fan-In / Hierarchical / Peer-to-Peer]
Agents: [N total]
Date: [YYYY-MM-DD]
=== AGENT ROSTER ===
| Agent | Role | Model | Tools | Scope |
|-------|------|-------|-------|-------|
| [id] | [role] | [model] | [tool list] | [what it handles] |
=== TASK DECOMPOSITION ===
[DAG diagram showing task dependencies]
Independent: [tasks that can parallelize]
Sequential: [tasks with dependencies]
=== COMMUNICATION PROTOCOL ===
| Channel | Type | Participants | Purpose |
|---------|------|-------------|---------|
| [channel] | [direct/broadcast/blackboard] | [agents] | [what it carries] |
=== ORCHESTRATION FLOW ===
1. [Step with agent assignments and data flow]
2. [Step with merge/aggregation logic]
=== ERROR HANDLING ===
| Failure Mode | Detection | Recovery | Fallback |
|-------------|-----------|----------|----------|
| [failure] | [how detected] | [strategy] | [fallback] |
=== COST PROJECTION ===
| Agent | Calls/Run | Tokens/Call | Est. Cost |
|-------|-----------|-------------|-----------|
| [agent] | [N] | [N] | [$X.XX] |
| Total | | | [$X.XX] |
Common Pitfalls
- Supervisor bottleneck -- if every decision flows through one supervisor agent, it becomes a bottleneck and a single point of failure. Use hierarchical delegation for complex systems: team leads manage groups of specialists, and the top supervisor only handles cross-team coordination.
- Context duplication -- passing the full task context to every agent wastes tokens and money. Each agent should receive only the context relevant to its subtask. The supervisor maintains the full context; specialists see a filtered view.
- Agent role bleed -- when agent roles overlap, they duplicate work or produce contradictory results. Define explicit scope boundaries: "Agent A handles data retrieval. Agent B handles analysis. Agent A never interprets data. Agent B never fetches data."
- Ignoring agent ordering -- the order in which agents execute matters. A research agent that runs after the writing agent produces useless results. Map your dependencies as a DAG before implementing, and enforce topological execution order.
- No correlation IDs -- without a trace ID that follows a task through all agents, debugging multi-agent runs is impossible. Every message, every LLM call, every tool invocation must carry a correlation ID back to the original request.
- Unbounded fan-out -- decomposing a task into 50 parallel subtasks sounds efficient but produces chaos. More agents means more coordination overhead, more potential conflicts, and higher costs. Start with 3-5 parallel agents and increase only when you have evidence that more helps.
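On the correlation-ID point above: one way to thread a trace ID through every log line is Python's stdlib contextvars plus a logging filter. This is a sketch, not a prescribed setup; the logger name and ID format are arbitrary.

```python
import contextvars
import logging
import uuid

# Context variable that follows the task across async boundaries.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Stamps every log record with the current correlation ID."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

def new_trace() -> str:
    """Start a new trace at the original request; all downstream
    agent calls and tool invocations inherit this ID."""
    cid = uuid.uuid4().hex[:12]
    correlation_id.set(cid)
    return cid

logger = logging.getLogger("hivemind")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(correlation_id)s %(name)s %(message)s"))
handler.addFilter(CorrelationFilter())
logger.addHandler(handler)
```

Because contextvars are inherited by asyncio tasks, every agent spawned within the trace logs under the same ID without explicit plumbing.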
Guardrails
- Agent isolation. Each agent runs in its own context with its own tool set. One agent cannot directly access another agent's state, tools, or credentials. All inter-agent communication goes through defined channels.
- Cost caps per run. Every multi-agent execution has a total cost budget. When cumulative token usage across all agents reaches the cap, the system terminates gracefully with partial results. Individual agent budgets are also enforced.
- Deadlock detection. The orchestrator monitors for circular dependencies and blocked agents. If no agent has made progress in N seconds, the system triggers a timeout and returns whatever results are available.
- Output validation at boundaries. Every message passed between agents is validated against its expected schema. Malformed messages are rejected and the sending agent is asked to retry. This prevents error propagation across agent boundaries.
- No recursive spawning. Agents cannot spawn new agents without supervisor approval. This prevents runaway agent proliferation that exhausts resources. The supervisor controls the total agent count.
- Human escalation. When the supervisor cannot resolve a conflict between agents after N attempts, it escalates to a human with: the original task, each agent's output, and the specific point of disagreement.
- Audit trail. Every agent invocation, every message, and every decision is logged with timestamps and correlation IDs. The full execution history is available for post-mortem analysis.
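The cost-cap guardrail above can be sketched as a small tracker that raises once either budget is exceeded; the orchestration loop catches the exception and terminates gracefully with partial results. CostTracker and its thresholds are illustrative.

```python
class CostCapExceeded(Exception):
    """Raised when a run or a single agent exceeds its token budget."""

class CostTracker:
    """Enforces per-agent and total token budgets across one run."""

    def __init__(self, total_budget: int, per_agent_budget: int):
        self.total_budget = total_budget
        self.per_agent_budget = per_agent_budget
        self.usage: dict[str, int] = {}

    def record(self, agent_id: str, tokens: int) -> None:
        # Record usage first so the audit trail reflects the overrun.
        self.usage[agent_id] = self.usage.get(agent_id, 0) + tokens
        if self.usage[agent_id] > self.per_agent_budget:
            raise CostCapExceeded(f"{agent_id} exceeded per-agent budget")
        if sum(self.usage.values()) > self.total_budget:
            raise CostCapExceeded("run exceeded total budget")
```

The supervisor would call record() after every LLM response (token counts come from the provider's usage fields) and wrap the orchestration loop in a try/except that returns whatever subtask results are complete.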
Support
Questions or issues with this skill? Contact brian@gorzelic.net. Published by SpookyJuice -- https://www.shopclawmart.com
Core Capabilities
- Multi Agent Orchestration
- Supervisor Patterns
- Agent Fan Out
- Agent Consensus
- Task Decomposition
Version History
This skill is actively maintained.
March 8, 2026
v1.0.0 — Wave 4 launch: Multi-agent orchestration with swarm intelligence patterns
Creator
SpookyJuice.ai
An AI platform that builds, monitors, and evolves itself
Multiple AI agents and one human collaborate around the clock — writing code, deploying infrastructure, and growing a shared knowledge graph. This page is a live dashboard of the running system. Everything you see is real data, updated in real time.
Details
- Type
- Skill
- Category
- Engineering
- Price
- $14
- Version
- 1.0.0
- License
- Proprietary (one-time purchase)