Bootstrap Context

Every OpenClaw prompt starts with injected workspace files — AGENTS.md, SOUL.md, TOOLS.md, IDENTITY.md, USER.md, MEMORY.md, HEARTBEAT.md — before the conversation even begins. This section covers how to audit, compact, and optimize that bootstrap payload so the agent actually sees the rules it's supposed to follow.

This is the highest-impact optimization area. At scale, bootstrap context is injected on every API call, and AGENTS.md truncation silently drops operational protocols the agent needs.

Key Problems

Token cost at scale

Large workspace files are injected on every call. Even with high cache hit rates, the raw volume adds up and still counts against context window limits.
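A point-in-time audit makes this cost concrete. A minimal sketch, assuming the workspace file names listed above and a rough ~4-characters-per-token heuristic (swap in a real tokenizer for exact counts); the function name `audit` is illustrative:

```python
import os

# Workspace files injected into every prompt (names from this page).
BOOTSTRAP_FILES = [
    "AGENTS.md", "SOUL.md", "TOOLS.md", "IDENTITY.md",
    "USER.md", "MEMORY.md", "HEARTBEAT.md",
]

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return len(text) // 4

def audit(workspace: str) -> dict:
    """Return an estimated per-file token cost for the bootstrap payload."""
    report = {}
    for name in BOOTSTRAP_FILES:
        path = os.path.join(workspace, name)
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                report[name] = estimate_tokens(f.read())
    report["TOTAL"] = sum(report.values())
    return report
```

Multiply the total by calls per day to see what the payload costs at your volume, even before cache misses.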

Truncation is silent and destructive

AGENTS.md can exceed the system prompt character budget. The middle portion gets cut silently, dropping sections like circuit breaker protocols and heartbeat behavior — actual operational knowledge the agent needs.
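The failure mode can be audited mechanically. A sketch, assuming a keep-head-and-tail truncation strategy and `#`-style markdown headings; both function names are illustrative:

```python
def middle_truncate(text: str, budget: int, marker: str = "\n...\n") -> str:
    # Keep the head and tail, cut the middle -- the silent failure described above.
    if len(text) <= budget:
        return text
    keep = budget - len(marker)
    head = keep // 2
    tail = keep - head
    return text[:head] + marker + text[-tail:]

def dropped_headings(original: str, truncated: str) -> list[str]:
    """List section headings present in the original but cut by truncation."""
    return [
        line for line in original.splitlines()
        if line.startswith("#") and line not in truncated
    ]
```

Running `dropped_headings` against the file and its truncated form turns a silent cut into an explicit list of lost sections (e.g. circuit breaker protocols).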

Triple-stated rules drift

The same rule appears in AGENTS.md, SOUL.md, and MEMORY.md with slightly different wording. Three versions of a rule eventually diverge, and the model picks whichever it hits first (primacy bias).
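Drifted duplicates can be surfaced with fuzzy string matching. A sketch using stdlib `difflib`; the input shape (file name mapped to a list of rule lines) and the 0.8 similarity threshold are assumptions to tune:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(files: dict[str, list[str]], threshold: float = 0.8):
    """Flag rule lines that appear in multiple files with slightly
    different wording -- the drift pattern described above."""
    all_lines = [
        (name, line) for name, lines in files.items() for line in lines
    ]
    hits = []
    for (fa, la), (fb, lb) in combinations(all_lines, 2):
        if fa != fb and SequenceMatcher(None, la, lb).ratio() >= threshold:
            hits.append((fa, la, fb, lb))
    return hits
```

Each hit is a candidate for consolidation into a single canonical file, so primacy bias has only one version to pick.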

No conditional loading

Everything loads on every call. Discord channel tables load in Telegram sessions. Origin stories load in cron jobs. There is no mechanism to say "only inject this when relevant."
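One way such a mechanism could look: a per-session-type injection policy. The file names come from this page; the session types and the policy table itself are illustrative assumptions, not OpenClaw behavior:

```python
# Hypothetical policy: which bootstrap files each session type actually needs.
INJECTION_POLICY = {
    "discord":   ["AGENTS.md", "SOUL.md", "TOOLS.md", "IDENTITY.md",
                  "USER.md", "MEMORY.md"],
    "telegram":  ["AGENTS.md", "SOUL.md", "TOOLS.md", "USER.md", "MEMORY.md"],
    "cron":      ["AGENTS.md", "TOOLS.md"],
    "heartbeat": ["AGENTS.md", "HEARTBEAT.md"],
}

def files_for(session_type: str) -> list[str]:
    """Return the bootstrap files to inject for a session type.
    Unknown types fall back to everything (fail open rather than drop rules)."""
    default = sorted({f for fs in INJECTION_POLICY.values() for f in fs})
    return INJECTION_POLICY.get(session_type, default)
```

With a table like this, Discord channel data never reaches a Telegram session, and cron jobs skip origin stories entirely.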

What's Here

  • Light Context Mode — Skip workspace file injection for crons and heartbeats that don't need it. The single biggest token savings for high-frequency jobs.
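In its simplest form, light context mode is a flag that bypasses workspace injection at prompt-assembly time. A minimal sketch; the function name and prompt layout are assumptions, not the actual OpenClaw implementation:

```python
def build_prompt(base_prompt: str, workspace_files: dict[str, str],
                 light_context: bool = False) -> str:
    """Assemble the system prompt.  In light-context mode the workspace
    payload is skipped entirely, as described for crons and heartbeats."""
    if light_context:
        return base_prompt
    payload = "\n\n".join(
        f"## {name}\n{body}" for name, body in workspace_files.items()
    )
    return f"{base_prompt}\n\n{payload}"
```

For a job that fires every few minutes, skipping the payload removes its full token cost from every call rather than shaving it at the margins.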

Tracks

  • Token audits — point-in-time measurements of what is being injected and what it costs
  • Compaction strategies — reducing token count without losing signal
  • Deduplication — finding and eliminating redundant rules across files
  • Conditional loading — patterns for context-sensitive injection

Status

Light context documented. Token audit tooling and compaction strategies are next.

Built with OpenClaw 🤖