🦀 ClawHub
Prompt Cache
by @nissan
SHA-256 prompt deduplication for LLM and TTS calls — hash-normalize prompts, check the cache before calling APIs, and store results for instant replay. Use when maki...
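For intuition, the dedup key described above (a SHA-256 hash over a normalized prompt) might look like the sketch below. The `cache_key` helper name and the exact normalization rules are assumptions for illustration, not the package's actual API:

```python
import hashlib

def cache_key(prompt: str, child_name: str, language: str) -> str:
    """Hypothetical sketch: derive a SHA-256 dedup key from a
    normalized prompt plus the parameters that change the output."""
    # Assumed normalization: collapse whitespace and lowercase, so
    # cosmetically different prompts map to the same cache entry.
    parts = [
        " ".join(prompt.lower().split()),
        child_name.lower(),
        language.lower(),
    ]
    return hashlib.sha256("\x00".join(parts).encode("utf-8")).hexdigest()
```

With a scheme like this, two prompts that differ only in case or spacing share one cache entry, while a different `child_name` or `language` produces a distinct key.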
📦 Core Types
💡 Examples
```python
import prompt_cache

prompt = "Tell me a story about clouds"
child_name = "Sophie"
language = "fr"

# Check the cache before calling the expensive API
cached = await prompt_cache.get_cached(
    prompt=prompt,
    child_name=child_name,
    language=language,
)
if cached:
    return cached  # Free! No API call needed.

# Cache miss — call the API
result = await generate_story(prompt, child_name, language)

# Store for next time
await prompt_cache.set_cached(prompt, child_name, language, result)
```
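To make the `get_cached`/`set_cached` semantics in the example concrete, here is a minimal in-memory sketch of what such a backend could do. The dictionary store and the whitespace/case normalization are assumptions for illustration; the real package's storage and normalization are not shown on this page:

```python
import hashlib

# Hypothetical in-memory store keyed by SHA-256 digest.
_store: dict[str, str] = {}

def _key(prompt: str, child_name: str, language: str) -> str:
    # Assumed normalization: collapse whitespace and lowercase
    # before hashing, so cosmetically different prompts collide.
    parts = [
        " ".join(prompt.lower().split()),
        child_name.lower(),
        language.lower(),
    ]
    return hashlib.sha256("\x00".join(parts).encode("utf-8")).hexdigest()

async def get_cached(prompt: str, child_name: str, language: str):
    # Returns the stored result, or None on a cache miss.
    return _store.get(_key(prompt, child_name, language))

async def set_cached(prompt: str, child_name: str, language: str, result: str):
    _store[_key(prompt, child_name, language)] = result
```

Because the key is derived from the normalized prompt plus `child_name` and `language`, replaying the same request skips the API call entirely, while changing any of the three parameters forces a fresh generation.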
TERMINAL

```
clawhub install prompt-cache
```