๐ŸŽ Get the FREE AI Skills Starter Guide โ€” Subscribe โ†’
BytesAgainBytesAgain
๐Ÿฆ€ ClawHub

An LLM router skill for OpenClaw

by @fanyadan

A LangGraph-based intelligent task router that splits work between PRO (heavy-reasoning) and FLASH (fast) models using 5-dimension complexity scoring, configur...

💡 Examples

Basic Usage (via exec)

When the user says "走 super-router" (Chinese for "use super-router"), says "use super-router", or asks for router analysis:

# Direct execution with the task as an argument
# ("分析 K8s YAML 错误并重写配置" = "Analyze K8s YAML errors and rewrite the config")
terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.openclaw/skills/super-router/scripts/router.py '分析 K8s YAML 错误并重写配置'")

With Streaming (Node-Level Progress)

terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.openclaw/skills/super-router/scripts/router.py --stream 'Your complex task'")

Via Environment Variable (Agent Compatibility)

For agents that struggle with non-ASCII arguments:

# Normalize task to short ASCII English, then pass as argument
terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.openclaw/skills/super-router/scripts/router.py 'Analyze K8s YAML errors and fix'")

Or via env var (if agent supports it)

terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.openclaw/skills/super-router/scripts/router.py", env={"ROUTER_TASK": "Your complex task description"})
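Such an argument/env fallback can be sketched as follows (a hypothetical illustration; the actual handling inside router.py may differ):

```python
import os
import sys

def resolve_task() -> str:
    """Return the task from argv if present, else from the ROUTER_TASK env var.

    Hypothetical sketch: prefer a CLI argument, fall back to the environment
    variable, and fail loudly when neither is set.
    """
    if len(sys.argv) > 1 and sys.argv[1].strip():
        return sys.argv[1]
    task = os.environ.get("ROUTER_TASK", "").strip()
    if not task:
        sys.exit("No task given: pass it as an argument or set ROUTER_TASK")
    return task
```

This ordering lets agents that mangle non-ASCII arguments fall back to the env var without changing the command line.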

Handling Long-Running Execution

If exec returns "Command still running":

# Continue polling with process tool
process(action="poll", session_id="")

# Wait for completion

process(action="wait", session_id="", timeout=300)
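Outside the agent's process tool, the same poll-then-wait pattern can be reproduced with the standard library (a generic sketch, not the tool's actual implementation):

```python
import subprocess
import time

def run_with_polling(cmd, poll_interval=1.0, timeout=300):
    """Start cmd, poll until it exits, and return (returncode, stdout).

    Generic stand-in for the agent's poll/wait tools: poll() returns None
    while the process is still running, mirroring "Command still running".
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    deadline = time.monotonic() + timeout
    while proc.poll() is None:  # None means the command is still running
        if time.monotonic() > deadline:
            proc.kill()
            raise TimeoutError(f"command still running after {timeout}s")
        time.sleep(poll_interval)
    return proc.returncode, proc.stdout.read()
```

The 300-second timeout mirrors the `timeout=300` passed to the wait call above.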

Important: Once process shows completion, your next assistant message MUST start with Router result: or Router failed: and include at least one real detail from the output (e.g., "Planner fallback", "Ollama timed out", "BTC"). Never reply with just ---, punctuation, or empty lines.

📋 Tips & Best Practices

"Router timed out" / "Ollama returned an empty response"

  • Best fix when keeping a large Planner: keep ROUTER_PLANNER_MODEL=gemma4:26b, but set ROUTER_JUDGE_MODEL=llama3.1:8b.
  • All-gemma mode: set ROUTER_JUDGE_MODEL=gemma4:26b, ROUTER_JUDGE_TIMEOUT=600, and ROUTER_MAX_CONCURRENCY=1; expect much longer runs.
  • Use --stream and increase the terminal/process timeout if the Planner itself may take longer than 60s.
  • Set ROUTER_JUDGE_TIMEOUT=300 or higher only when intentionally using a 20B+ Judge.
  • Alternative: use Gemini CLI for planning: ROUTER_PLANNER_MODEL=google-gemini-cli/gemini-3-pro-preview.
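Putting the first two fixes together, a typical environment setup (using the values from the tips above) might look like:

```shell
# Large Planner, small Judge: keeps planning quality while Judge calls stay fast.
export ROUTER_PLANNER_MODEL="gemma4:26b"
export ROUTER_JUDGE_MODEL="llama3.1:8b"

# All-gemma alternative: one slow Judge at a time, with a generous timeout.
# export ROUTER_JUDGE_MODEL="gemma4:26b"
# export ROUTER_JUDGE_TIMEOUT=600
# export ROUTER_MAX_CONCURRENCY=1

echo "Planner=$ROUTER_PLANNER_MODEL Judge=$ROUTER_JUDGE_MODEL"
```

Export these in the shell (or agent environment) that launches router.py so the script inherits them.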
"Planner timed out after 30s" (or 90s)

  • The model is too large or not yet loaded. Warmup helps, but large models may still time out.
  • Use --stream plus a longer terminal/process timeout, or choose a smaller planner model.
  • Check the Ollama logs (the ollama serve output) for errors.
"FLASH kept escalating to PRO"

  • The task may genuinely require heavy reasoning.
  • Check whether the FLASH model is too small for your tasks.
  • Try setting ROUTER_FLASH_MODEL to a larger model.
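The escalation behaviour can be pictured as a threshold over the complexity score (an illustrative sketch only; the dimension names and threshold here are assumptions, not the skill's documented rubric):

```python
# Hypothetical dimensions; the skill's real 5-dimension rubric may differ.
DIMENSIONS = ("reasoning_depth", "domain_knowledge", "ambiguity",
              "output_length", "tool_use")

def route(scores: dict, threshold: float = 3.0) -> str:
    """Average the per-dimension scores (1-5) and pick a model tier."""
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return "PRO" if avg >= threshold else "FLASH"
```

If most of your tasks score near the threshold, a slightly larger FLASH model reduces round-trips to PRO.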
"Gemini CLI AbortError or auth failures"

  • If gemini-cli returns AbortError or authentication errors in non-interactive sessions, this is often an infrastructure/API timeout or session issue.
  • Use --stream to monitor progress in real time, and make sure ROUTER_JUDGE_TIMEOUT and the terminal timeouts are high enough that the external process is not killed.
"Planner produced only one subtask"

  • The task may be simple enough not to need decomposition.
  • The Planner model may be too small; try ROUTER_PLANNER_MODEL=gemma4:31b (if you have the patience for 90s+ waits).
Install from ClawHub:

clawhub install super-router
