Ollama Memory Embeddings by @vidarbrekke
Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.
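For orientation, the settings this skill manages look roughly like the sketch below. Only the field names (provider, model, baseUrl, apiKey) and the agents.defaults.memorySearch path come from this page; the exact nesting, the provider value, and the example model are assumptions.

```ts
// Rough sketch of the memorySearch settings this skill writes
// (assumed shape, not OpenClaw's exact schema; values are examples only).
const memorySearch = {
  provider: "openai",                   // OpenAI-compatible client pointed at Ollama (assumed value)
  baseUrl: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint on its default port
  model: "nomic-embed-text:latest",     // example model, normalized with the :latest tag
  apiKey: "ollama",                     // any non-empty value works; "ollama" is just a convention
};
```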
Tips & Best Practices
- This does not modify OpenClaw package code; it only updates user config.
- A timestamped backup of the config is written before changes.
- If no local GGUF exists, the install proceeds by pulling the selected model from Ollama.
- Model names are normalized with a :latest tag for consistent interaction with Ollama (see the normalization sketch after this list).
- If the embedding model changes, rebuild/re-embed existing memory vectors to avoid retrieval mismatches across incompatible vector spaces.
- With --reindex-memory auto, the installer reindexes only when the effective embedding fingerprint has changed (provider, model, baseUrl, apiKey presence); a fingerprint sketch follows this list.
- Drift checks require a non-empty apiKey but do not require the literal value "ollama".
- Config backups are created only when a write is needed.
- Legacy schema fallback is supported: if agents.defaults.memorySearch is absent, the enforcer reads known legacy paths and mirrors writes to preserve compatibility (see the fallback sketch after this list).
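A minimal sketch of the :latest normalization mentioned above; the helper name is hypothetical, not the installer's actual code.

```ts
// Illustrative helper: append :latest when no tag is given.
function normalizeOllamaTag(model: string): string {
  return model.includes(":") ? model : `${model}:latest`;
}

normalizeOllamaTag("nomic-embed-text");        // => "nomic-embed-text:latest"
normalizeOllamaTag("nomic-embed-text:latest"); // => unchanged
```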
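The fingerprint check behind --reindex-memory auto can be pictured like this. The types and function names are illustrative; only the four contributing fields (provider, model, baseUrl, apiKey presence) come from the description above.

```ts
// Hedged sketch: reindex only when provider, model, baseUrl,
// or apiKey *presence* changes.
interface EmbeddingConfig {
  provider: string;
  model: string;
  baseUrl: string;
  apiKey?: string;
}

function embeddingFingerprint(cfg: EmbeddingConfig): string {
  // apiKey contributes only its presence, not its value.
  return [cfg.provider, cfg.model, cfg.baseUrl, cfg.apiKey ? "key" : "no-key"].join("|");
}

function shouldReindex(previous: EmbeddingConfig | undefined, next: EmbeddingConfig): boolean {
  return !previous || embeddingFingerprint(previous) !== embeddingFingerprint(next);
}
```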
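The legacy-schema fallback can be sketched as follows: read the canonical path first, fall back to a legacy location, and mirror writes to both. The legacy top-level path shown here is hypothetical; only agents.defaults.memorySearch is named in the tips.

```ts
// Hedged sketch of the fallback/mirroring behavior, not the enforcer's actual code.
type Config = Record<string, any>;

function readMemorySearch(config: Config): any {
  // Prefer the current schema, then a (hypothetical) legacy location.
  return config?.agents?.defaults?.memorySearch ?? config?.memorySearch;
}

function writeMemorySearch(config: Config, value: any): void {
  config.agents ??= {};
  config.agents.defaults ??= {};
  config.agents.defaults.memorySearch = value;
  if ("memorySearch" in config) {
    config.memorySearch = value; // mirror to the legacy location to preserve compatibility
  }
}
```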
Install: clawhub install ollama-memory-embeddings
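After installing, a quick way to confirm that Ollama answers OpenAI-compatible embedding requests is a direct call to /v1/embeddings. The URL assumes Ollama's default port, and the model name is only an example that must already be pulled into Ollama.

```ts
// Hedged smoke test against Ollama's OpenAI-compatible embeddings endpoint.
const response = await fetch("http://localhost:11434/v1/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "nomic-embed-text:latest", input: "hello world" }),
});

// The response follows the OpenAI embeddings shape: { data: [{ embedding: [...] }] }.
const { data } = await response.json();
console.log(`embedding dimensions: ${data[0].embedding.length}`);
```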