# Memory Search — Early High-Impact Setup
Without memory search, your agent wakes up every session with amnesia. It can read files it knows about, but it can't recall — it can't find the lesson it wrote last Tuesday, the decision you made about API design, or the mistake it already learned from.
## Why It Compounds
- Every lesson, decision, and preference becomes findable, not just written down
- Agents self-correct faster because they can check "have I seen this before?"
- The more your agent writes to daily notes and MEMORY.md, the more valuable search becomes
- Daily notes become a searchable knowledge base, not just a log
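The "findable, not just written down" part comes down to embedding similarity: each note chunk gets a vector, and a query retrieves its nearest neighbors. A minimal illustrative sketch of that ranking step, with toy 2-D vectors standing in for real bge-m3 embeddings (the gateway's actual indexing and hybrid scoring are more involved):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec: list[float], notes: dict[str, list[float]]) -> list[str]:
    """Return note ids sorted by similarity to the query vector.

    `notes` maps note id -> embedding vector (e.g. one per note chunk).
    """
    return sorted(notes, key=lambda nid: cosine(query_vec, notes[nid]),
                  reverse=True)

# Toy vectors; real bge-m3 embeddings are 1024-dimensional.
notes = {
    "2024-06-04-api-design.md": [0.9, 0.1],
    "2024-06-11-lesson.md":     [0.2, 0.8],
}
print(rank([1.0, 0.0], notes))  # api-design note ranks first
```

This is why search quality tracks embedding quality: a better model places "have I seen this before?" closer to the note that answers it.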
## Quick Setup (~10 minutes)
- Install Ollama and pull an embedding model:

  ```
  ollama pull bge-m3
  ```

- Add a `memorySearch` config to `openclaw.json`:

  ```jsonc
  {
    "agents": {
      "defaults": {
        "memorySearch": {
          "enabled": true,
          "provider": "ollama",
          "model": "bge-m3",
          "remote": { "baseUrl": "http://127.0.0.1:11434" }
        }
      }
    }
  }
  ```

- Restart the gateway.
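Before restarting the gateway, you can confirm the endpoint that `remote.baseUrl` points at is actually serving embeddings. A stdlib-only sketch against Ollama's `/api/embeddings` route (the `build_request` helper is just for illustration; it assumes Ollama is running locally with bge-m3 pulled):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:11434"  # matches remote.baseUrl in openclaw.json

def build_request(text: str, model: str = "bge-m3") -> tuple[str, bytes]:
    """Build the URL and POST body for Ollama's /api/embeddings endpoint."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    return f"{BASE_URL}/api/embeddings", body

def embed(text: str) -> list[float]:
    """Fetch one embedding vector from the local Ollama server."""
    url, body = build_request(text)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

if __name__ == "__main__":
    # Requires `ollama serve` to be running with bge-m3 available.
    print(len(embed("have I seen this before?")))
```

If this prints a vector length instead of a connection error, the gateway should be able to reach the same endpoint after restart.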
Zero API cost, no rate limits, works offline.
> **Tip:** Use bge-m3 (1.2 GB) for quality, or nomic-embed-text (274 MB) if RAM is tight. Cloud embedding providers (OpenAI, Gemini, Voyage) are also supported if you'd rather not run Ollama.
For full configuration, hybrid search tuning, model comparison, and troubleshooting, see the Memory Search guide.