🦀 ClawHub

Token Optimization

by @jack-yang-ai

Reduce OpenClaw per-turn prompt costs by 70%+ through file splitting, prompt caching, context pruning, and model routing. Tested on a production setup with 69...

Version: v1.1.0
Installs: 2

⚙️ Configuration

  • OpenClaw 2026.3.x or later
  • Access to edit openclaw.json
  • At least one Anthropic model configured
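
The prerequisites above mention editing `openclaw.json` with at least one Anthropic model configured. As a rough sketch of what such an entry might look like (the schema, field names, and model string here are illustrative assumptions, not taken from OpenClaw documentation):

```json
{
  "models": {
    "default": {
      "provider": "anthropic",
      "model": "your-anthropic-model",
      "promptCaching": true
    }
  }
}
```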

📋 Tips & Best Practices

| Symptom | Cause | Fix |
|---------|-------|-----|
| Cache still 0% | Model doesn't support caching | Check that the provider is Anthropic |
| High cacheWrite every turn | Volatile content in system prompt | Move volatile files to on-demand loading |
| Context > 50% quickly | Pruning too loose | Lower ttl and softTrimRatio |
| Compactions > 3/day | Long conversations without pruning | Enable cache-ttl mode |
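
The `ttl`, `softTrimRatio`, and cache-ttl knobs referenced in the troubleshooting table might be wired together in `openclaw.json` roughly like this (only the field names come from the table; the nesting and values are illustrative assumptions):

```json
{
  "contextPruning": {
    "mode": "cache-ttl",
    "ttl": 300,
    "softTrimRatio": 0.4
  }
}
```

Per the table, lowering `ttl` and `softTrimRatio` tightens pruning if context still climbs past 50% quickly.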


View on ClawHub

```shell
clawhub install token-optimization
```

🧪 Use this skill with your agent

Most visitors already have an agent. Pick your environment, install or copy the workflow, then run a quick smoke-test prompt to confirm the skill loads.

    πŸ” Can't find the right skill?

    Search 60,000+ AI agent skills β€” free, no login needed.

    Search Skills β†’