token-sisyphus
by @neardws
Burn LLM tokens toward a target count to satisfy corporate AI usage KPIs. Triggers when the user says: burn tokens, consume tokens, fill KPI, push the boulder, si...
Run the bundled script directly:

```shell
python {skillDir}/scripts/burn.py --target <count> [options]
```

| Flag | Description |
|------|-------------|
| `--target` | Token count to burn: 50000, 100k, or 1m (required) |
| `--provider` | `openai` \| `claude` \| `gemini` (default: `openai`) |
| `--model` | Model name (omit to use the provider default) |
| `--api-key` | API key (falls back to the env var) |
| `--base-url` | Custom endpoint URL (`openai` provider only) |
| `--max-tokens` | Max tokens per request (default: 500) |
| `--delay` | Seconds between requests (default: 0.5) |
| `--dry-run` | Simulate without making real API calls |
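The dry-run mode can be sketched as a simple loop, assuming (hypothetically) that each request consumes up to `--max-tokens` tokens and the script sleeps `--delay` seconds between requests; the actual internals of `burn.py` may differ:

```python
import time

def burn_dry_run(target, max_tokens=500, delay=0.0):
    """Simulate burning tokens toward a target without real API calls.

    Hypothetical sketch of the --dry-run behavior, not the real script.
    Returns (tokens_burned, requests_made).
    """
    total = 0
    requests = 0
    while total < target:
        # Each simulated request "consumes" up to max_tokens tokens.
        consumed = min(max_tokens, target - total)
        total += consumed
        requests += 1
        time.sleep(delay)  # pacing between requests (--delay)
    return total, requests

print(burn_dry_run(1000, max_tokens=300))  # (1000, 4)
```

With a target of 1000 and 300 tokens per request, the loop needs four simulated requests, the last one partial.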
Install the SDK for your chosen provider:

```shell
pip install openai                # for openai provider (default)
pip install anthropic             # for claude provider
pip install google-generativeai   # for gemini provider
```
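Since only one SDK is needed at a time, the script could fail fast when the chosen provider's SDK is missing. A minimal sketch of such a check (hypothetical helper, not the actual `burn.py` code; note the gemini pip package name differs from its module name):

```python
import importlib

# Provider -> (importable module, pip package name).
SDK_MODULES = {
    "openai": ("openai", "openai"),
    "claude": ("anthropic", "anthropic"),
    "gemini": ("google.generativeai", "google-generativeai"),
}

def check_sdk(provider):
    """Exit with an install hint if the provider's SDK is not importable."""
    module, package = SDK_MODULES[provider]
    try:
        importlib.import_module(module)
    except ImportError:
        raise SystemExit(f"Provider '{provider}' needs: pip install {package}")
```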
Set the corresponding env var:
| Provider | Env var |
|----------|---------|
| OpenAI / compatible | OPENAI_API_KEY |
| Claude | ANTHROPIC_API_KEY |
| Gemini | GEMINI_API_KEY |
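The `--api-key` fallback described above can be sketched like this (a hypothetical helper illustrating the table, assuming an explicit key always wins over the env var):

```python
import os

# Provider -> env var, from the table above.
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "claude": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def resolve_api_key(provider, explicit_key=None):
    """Return an explicit --api-key if given, else fall back to the env var."""
    if explicit_key:
        return explicit_key
    key = os.environ.get(ENV_VARS[provider])
    if not key:
        raise SystemExit(f"Set {ENV_VARS[provider]} or pass --api-key")
    return key
```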
Install the skill via ClawHub:

```shell
clawhub install token-sisyphus
```