📦 ClawHub
Local Inference Context
by @joekravelli
Context management for self-hosted LLM backends (llama.cpp, Ollama). Prevents mid-task 503 errors and context overflows caused by VRAM-limited KV caches. Use...
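The overflow-prevention idea can be sketched roughly: before each request, evict the oldest conversation turns until the estimated prompt fits the backend's context window. This is an illustrative sketch only, not this skill's actual implementation; the ~4-characters-per-token heuristic and the budget numbers are assumptions.

```python
# Sketch: keep a chat history within a fixed context budget before sending it
# to a local backend. The ~4-chars-per-token estimate is an illustrative
# assumption, not ClawHub's actual logic.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget_tokens: int) -> list[dict]:
    """Drop the oldest non-system messages until the history fits the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs: list[dict]) -> int:
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > budget_tokens:
        rest.pop(0)  # evict the oldest turn first
    return system + rest

history = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "x" * 4000},   # an old, oversized turn
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history, budget_tokens=512)
# The old 4000-char turn is evicted; system prompt and latest turn survive.
```

Trimming client-side before the request is what avoids the mid-task failure mode: the backend never sees a prompt larger than its VRAM-limited KV cache can hold.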
clawhub install local-inference-context