📦 ClawHub
Token Optimization
by @jack-yang-ai
Reduce OpenClaw per-turn prompt costs by 70%+ through file splitting, prompt caching, context pruning, and model routing. Tested on a production setup with 69...
⚙️ Configuration
openclaw.json

Tips & Best Practices
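The contents of openclaw.json did not survive extraction. As a rough sketch only, the pruning-related settings named in the tips below (`cache-ttl` mode, `ttl`, `softTrimRatio`) might be grouped like this; the key nesting and values here are hypothetical, not the skill's actual config:

```json
{
  "contextPruning": {
    "mode": "cache-ttl",
    "ttl": 1800,
    "softTrimRatio": 0.4
  }
}
```

Consult the installed skill's own openclaw.json for the authoritative key names and defaults.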
| Symptom | Cause | Fix |
|---------|-------|-----|
| Cache still 0% | Model doesn't support caching | Confirm the provider is Anthropic |
| High `cacheWrite` every turn | Volatile content in system prompt | Move volatile files to on-demand loading |
| Context > 50% quickly | Pruning too loose | Lower `ttl` and `softTrimRatio` |
| Compactions > 3/day | Long conversations without pruning | Enable `cache-ttl` mode |
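To make the `ttl` / `softTrimRatio` interaction in the table concrete, here is an illustrative sketch (not OpenClaw's actual implementation): messages older than `ttl` seconds are dropped, and if the survivors still exceed `soft_trim_ratio` of the context window, the oldest are trimmed first. All names and the message shape are assumptions for the example.

```python
import time

def prune_context(messages, ttl=3600, soft_trim_ratio=0.5,
                  max_tokens=200_000, now=None):
    """Sketch of TTL-plus-soft-trim pruning.

    messages: list of {"ts": unix_seconds, "tokens": int} dicts (hypothetical shape).
    Drops messages older than `ttl`, then trims oldest-first until the total
    token count fits under soft_trim_ratio * max_tokens.
    """
    now = time.time() if now is None else now
    # Hard expiry: anything older than ttl seconds is removed outright.
    kept = [m for m in messages if now - m["ts"] <= ttl]
    # Soft trim: shrink to the ratio-based budget, oldest first.
    budget = soft_trim_ratio * max_tokens
    while kept and sum(m["tokens"] for m in kept) > budget:
        kept.pop(0)
    return kept

msgs = [
    {"ts": 0,    "tokens": 120_000},
    {"ts": 5000, "tokens": 60_000},
    {"ts": 5500, "tokens": 30_000},
]
# With a 3600 s ttl at t=6000, only the first message has expired.
print(len(prune_context(msgs, ttl=3600, now=6000)))
```

Lowering `ttl` or `soft_trim_ratio` makes this loop discard more aggressively, which is why the table suggests reducing both when context fills past 50% too quickly.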
```bash
clawhub install token-optimization
```