Engram docs v0.1.0

OpenClaw

Engram adds persistent memory to OpenClaw agents. The recommended setup installs Engram as a native OpenClaw plugin, giving every agent memory_recall, memory_store, engram_search, memory_forget, memory_list, and memory_stats tools with automatic context injection.

Engram takes over the memory slot. With plugins.slots.memory: "engram", OpenClaw routes memory operations through Engram instead of its built-in session store, and Engram's tools (memory_recall, memory_store, engram_search, memory_forget, memory_list, memory_stats) are registered alongside OpenClaw's other built-in tools.

What changes vs. OpenClaw's native memory

OpenClaw's built-in memory is session-scoped — it resets every time the agent process restarts. Engram replaces it with memory that persists indefinitely across restarts, deployments, and machines.

                       OpenClaw native      Engram
Survives restart       ✗                    ✓
Cross-agent sharing    ✗                    ✓
Semantic search        ✗                    ✓
Memory types           single flat store    episodic / semantic / procedural
Importance scoring     ✗                    ✓
Knowledge graph        ✗                    ✓
Make sure the Engram server is running before using any integration method. Default port: 4901.

Switching from another memory plugin

When plugins.slots.memory is set to "engram", Engram fills the memory slot. Any plugin that previously held it (memory-core, memory-lancedb, or a custom plugin) steps back. Your old memories are not deleted — they remain in their original storage — but agents will no longer see them automatically.

Option 1 — Fresh start (recommended)

Don't migrate anything. Engram begins empty and builds memory from your conversations going forward. This is the cleanest path and works well for most users. Disable your old plugin in openclaw.json and proceed to the installation steps below.

// openclaw.json — disable the old plugin
{
  "plugins": {
    "entries": {
      "memory-core":    { "enabled": false },
      "memory-lancedb": { "enabled": false }
    }
  }
}

Option 2 — Export and import existing memories

If you have important memories to preserve, export them from your old system and POST each one to Engram's REST API. The import format is the same regardless of source:

curl -X POST http://localhost:4901/api/memory \
  -H 'Content-Type: application/json' \
  -d '{
    "content":    "your memory content here",
    "type":       "semantic",
    "importance": 0.7,
    "source":     "migration"
  }'

# type options:
#   semantic   — facts, knowledge, preferences  (use for most memories)
#   episodic   — past events and conversations
#   procedural — how-to steps and patterns

From memory-lancedb

Use the openclaw ltm CLI to list memories, then POST each to Engram:

# List all stored memories
openclaw ltm search "." --limit 100

# For each entry, POST to Engram using the snippet above

From memory-core

memory-core has no bulk export command. Use openclaw memory search to find entries by topic, then copy them into Engram one by one:

openclaw memory search "." --max-results 100
# Copy each result and POST to Engram using the snippet above

From a custom system (PostgreSQL, REST API, files)

Fetch all records from your existing store, map them to Engram's format, and POST each to /api/memory. For bulk imports a short Python or TypeScript script is easiest — see the REST Clients page for boilerplate.
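As a sketch of that approach, the hypothetical TypeScript script below reads a memories.json export and POSTs each record to /api/memory. The LegacyRecord field names are assumptions — adapt the mapping to whatever your old store actually exports:

```typescript
// bulk-import.ts — hypothetical migration script; adjust LegacyRecord and
// the mapping in toEngramRecord to match your old store's export format.
import { readFileSync } from "node:fs";

interface LegacyRecord {
  text: string;       // assumed field name — rename to match your export
  createdAt?: string;
}

interface EngramRecord {
  content: string;
  type: "episodic" | "semantic" | "procedural";
  importance: number;
  source: string;
}

// Map one legacy record onto Engram's import format (same fields as the curl example above).
export function toEngramRecord(rec: LegacyRecord): EngramRecord {
  return {
    content: rec.text,
    type: "semantic",  // most migrated facts fit "semantic"
    importance: 0.7,
    source: "migration",
  };
}

// Read the export file and POST each record sequentially.
export async function importAll(path: string, baseUrl = "http://localhost:4901"): Promise<void> {
  const legacy: LegacyRecord[] = JSON.parse(readFileSync(path, "utf8"));
  for (const rec of legacy) {
    const res = await fetch(`${baseUrl}/api/memory`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(toEngramRecord(rec)),
    });
    if (!res.ok) console.error(`import failed (HTTP ${res.status}): ${rec.text.slice(0, 40)}`);
  }
}

// importAll("memories.json").catch(console.error);
```

Uncomment the last line to run it. Sequential POSTs are deliberate: even a few hundred records finish quickly, and the server load stays predictable.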

If you were running a memory-sync cron job (a script that periodically writes memories to a database or file), remove that cron after migrating so it does not overwrite Engram's data or conflict with it.

Option A — OpenClaw Plugin (recommended)

Install Engram as a native OpenClaw memory plugin. Every agent gets memory_recall, memory_store, engram_search, memory_forget, memory_list, and memory_stats tools. The plugin also:

  • Auto-recalls — injects relevant memories before each agent turn via before_agent_start
  • Auto-stores conversations — saves every exchange as episodic memory via message_received and message_sent hooks
  • Normalizes importance — accepts both 0–1 and legacy 1–5 scales
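The normalization rule in the last bullet is simple enough to state precisely: values above 1 are assumed to be on the legacy 1–5 scale, divided by 5, and clamped to 1 (this mirrors the logic inside the plugin's memory_store tool):

```typescript
// Normalize an importance score: accepts 0–1 directly, or the legacy
// 1–5 scale (any value above 1 is divided by 5 and clamped to 1).
export function normalizeImportance(raw?: number): number {
  let importance = raw ?? 0.7; // plugin default when unspecified
  if (importance > 1) importance = Math.min(importance / 5, 1);
  return importance;
}
```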

1. Create the plugin directory

mkdir -p ~/.openclaw/plugins/engram

2. Create openclaw.plugin.json

// ~/.openclaw/plugins/engram/openclaw.plugin.json
{
  "id": "engram",
  "kind": "memory",
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "url": { "type": "string" },
      "source": { "type": "string" },
      "autoRecall": { "type": "boolean" },
      "maxTokens": { "type": "number" }
    }
  }
}

3. Create package.json

// ~/.openclaw/plugins/engram/package.json
{
  "name": "@openclaw/engram",
  "version": "1.0.0",
  "type": "module",
  "openclaw": {
    "extensions": ["./index.js"]
  }
}

4. Create index.js

This is the full plugin. Copy it as-is — zero dependencies, uses native fetch.

// ~/.openclaw/plugins/engram/index.js

const engramPlugin = {
  id: "engram",
  name: "Memory (Engram)",
  description: "Persistent semantic memory via Engram REST API",
  kind: "memory",

  register(api) {
    const cfg = api.pluginConfig ?? {};
    const baseUrl = cfg.url ?? process.env.ENGRAM_API ?? "http://localhost:4901";
    const source = cfg.source ?? "openclaw";
    const maxTokens = cfg.maxTokens ?? 1500;
    const autoRecall = cfg.autoRecall !== false;

    // ── HTTP helpers ──

    async function engramGet(path) {
      const res = await fetch(`${baseUrl}${path}`, { signal: AbortSignal.timeout(5000) });
      if (!res.ok) throw new Error(`Engram HTTP ${res.status}`);
      return res.json();
    }

    async function engramPost(path, body) {
      const res = await fetch(`${baseUrl}${path}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
        signal: AbortSignal.timeout(5000),
      });
      if (!res.ok) {
        let detail = "";
        try { detail = ` — ${JSON.stringify(await res.json())}`; } catch {}
        throw new Error(`Engram HTTP ${res.status}${detail}`);
      }
      return res.json();
    }

    // ── Tool: memory_recall ──

    api.registerTool(
      {
        name: "memory_recall",
        label: "Memory Recall",
        description: "Recall relevant context from Engram long-term memory.",
        parameters: {
          type: "object",
          properties: {
            query: { type: "string", description: "What to look up in memory" },
            maxTokens: { type: "number", description: "Max context tokens (default: 1500)" },
          },
          required: ["query"],
        },
        async execute(_id, params) {
          const { query, maxTokens: mt = maxTokens } = params;
          try {
            const result = await engramPost("/api/recall", { query, maxTokens: mt, source });
            const count = result.memories?.length ?? 0;
            if (!result.context || count === 0)
              return { content: "No relevant memories found." };
            return { content: result.context, details: { count } };
          } catch (err) {
            return { content: `Memory unavailable: ${err.message}` };
          }
        },
      },
      { name: "memory_recall" },
    );

    // ── Tool: memory_store ──

    api.registerTool(
      {
        name: "memory_store",
        label: "Memory Store",
        description: "Save important information to Engram long-term memory.",
        parameters: {
          type: "object",
          properties: {
            content: { type: "string", description: "Information to store" },
            type: { type: "string", enum: ["episodic", "semantic", "procedural"],
              description: "episodic = events, semantic = facts, procedural = how-to. Default: semantic" },
            importance: { type: "number", description: "Importance 0–1 (default: 0.7)" },
          },
          required: ["content"],
        },
        async execute(_id, params) {
          const { content, type: memType = "semantic" } = params;
          // Normalize importance: accept 0–1 or legacy 1–5 scale
          let importance = params.importance ?? 0.7;
          if (importance > 1) importance = Math.min(importance / 5, 1);
          try {
            const result = await engramPost("/api/memory", { content, type: memType, importance, source });
            return { content: `Stored (id: ${result.id})` };
          } catch (err) {
            return { content: `Store failed: ${err.message}` };
          }
        },
      },
      { name: "memory_store" },
    );

    // ── Tool: engram_search ──

    api.registerTool(
      {
        name: "engram_search",
        label: "Engram Search",
        description: "Semantic vector search across all Engram memories.",
        parameters: {
          type: "object",
          properties: {
            query: { type: "string", description: "Search query" },
            topK: { type: "number", description: "Max results (default: 10)" },
            threshold: { type: "number", description: "Similarity threshold 0–1 (default: 0.3)" },
          },
          required: ["query"],
        },
        async execute(_id, params) {
          const { query, topK = 10, threshold = 0.3 } = params;
          try {
            const result = await engramPost("/api/search", { query, topK, threshold });
            const results = result.results ?? [];
            if (results.length === 0)
              return { content: "No results found." };
            const text = results.map((r, i) =>
              `${i + 1}. [${r.type ?? "memory"}] ${r.content} (${((r.score ?? 0) * 100).toFixed(0)}%)`
            ).join("\n");
            return { content: text, details: { count: results.length } };
          } catch (err) {
            return { content: `Search failed: ${err.message}` };
          }
        },
      },
      { name: "engram_search" },
    );

    // ── Tool: memory_forget ──

    api.registerTool(
      {
        name: "memory_forget",
        label: "Memory Forget",
        description: "Delete (archive) a memory from Engram by its ID.",
        parameters: {
          type: "object",
          properties: {
            id: { type: "string", description: "Memory ID to delete" },
          },
          required: ["id"],
        },
        async execute(_id, params) {
          const { id } = params;
          try {
            const res = await fetch(`${baseUrl}/api/memory/${encodeURIComponent(id)}`, {
              method: "DELETE", signal: AbortSignal.timeout(5000),
            });
            if (res.status === 204 || res.ok) return { content: `Deleted memory ${id}` };
            return { content: `Failed to delete ${id}: HTTP ${res.status}` };
          } catch (err) {
            return { content: `Delete failed: ${err.message}` };
          }
        },
      },
      { name: "memory_forget" },
    );

    // ── Tool: memory_list ──

    api.registerTool(
      {
        name: "memory_list",
        label: "Memory List",
        description: "List all memories with optional filtering and pagination.",
        parameters: {
          type: "object",
          properties: {
            type: { type: "string", enum: ["episodic", "semantic", "procedural"] },
            source: { type: "string", description: "Filter by source tag" },
            limit: { type: "number", description: "Max results (default: 50, max: 200)" },
            offset: { type: "number", description: "Pagination offset (default: 0)" },
          },
        },
        async execute(_id, params) {
          const { type, source, limit = 50, offset = 0 } = params;
          try {
            const qs = new URLSearchParams();
            if (type) qs.set("type", type);
            if (source) qs.set("source", source);
            qs.set("limit", String(Math.min(limit, 200)));
            qs.set("offset", String(offset));
            const result = await engramGet(`/api/memory?${qs}`);
            const memories = result.memories ?? [];
            if (memories.length === 0)
              return { content: `No memories found (total: ${result.count ?? 0})` };
            const text = memories.map((m, i) =>
              `${offset + i + 1}. [${m.type}] id=${m.id} imp=${m.importance ?? "?"}\n   ${(m.content ?? "").slice(0, 120)}`
            ).join("\n");
            return { content: `${result.count ?? memories.length} total (showing ${memories.length}):\n\n${text}` };
          } catch (err) {
            return { content: `List failed: ${err.message}` };
          }
        },
      },
      { name: "memory_list" },
    );

    // ── Tool: memory_stats ──

    api.registerTool(
      {
        name: "memory_stats",
        label: "Memory Stats",
        description: "Get Engram brain statistics: total count, type breakdown, graph size.",
        parameters: { type: "object", properties: {} },
        async execute() {
          try {
            const s = await engramGet("/api/stats");
            return { content: [
              `Total: ${s.total}`,
              `By type: episodic=${s.byType?.episodic ?? 0}, semantic=${s.byType?.semantic ?? 0}, procedural=${s.byType?.procedural ?? 0}`,
              `Graph: ${s.graphNodes ?? 0} nodes, ${s.graphEdges ?? 0} edges`,
            ].join("\n"), details: s };
          } catch (err) {
            return { content: `Stats failed: ${err.message}` };
          }
        },
      },
      { name: "memory_stats" },
    );

    // ── Auto-recall: inject memories before each agent turn ──

    if (autoRecall) {
      api.on("before_agent_start", async (event) => {
        if (!event.prompt || event.prompt.length < 10) return;
        try {
          const result = await engramPost("/api/recall", {
            query: event.prompt, maxTokens: Math.min(maxTokens, 800), source,
          });
          if (result.context && result.memories?.length > 0)
            return { prependContext: result.context };
        } catch { /* degrade silently */ }
      }, { name: "engram-auto-recall", description: "Inject engram memories before agent turn" });
    }

    // ── Auto-store: save every conversation exchange as episodic memory ──

    api.on("message_received", async (event) => {
      const text = event.text ?? event.content ?? "";
      if (!text || text.length < 5) return;
      try {
        await engramPost("/api/memory", {
          content: `User: ${text.slice(0, 1000)}`, type: "episodic", importance: 0.5, source,
        });
      } catch { /* skip silently */ }
    }, { name: "engram-store-received", description: "Store user message in engram" });

    api.on("message_sent", async (event) => {
      const text = event.text ?? event.content ?? "";
      if (!text || text.length < 10) return;
      try {
        await engramPost("/api/memory", {
          content: text.slice(0, 1000), type: "episodic", importance: 0.4, source,
        });
      } catch { /* skip silently */ }
    }, { name: "engram-store-sent", description: "Store assistant reply in engram" });

    // ── Health check on startup ──

    api.registerService({
      id: "engram",
      start: async () => {
        try {
          const health = await engramGet("/api/health");
          api.logger.info(`engram: connected to ${baseUrl} (v${health.version ?? "?"})`);
        } catch {
          api.logger.warn(`engram: not reachable at ${baseUrl} — tools will degrade`);
        }
      },
      stop: () => { api.logger.info("engram: stopped"); },
    });
  },
};

export default engramPlugin;

5. Update ~/.openclaw/openclaw.json

Merge the following into your existing openclaw.json. If you already have a plugins block, add the new keys into it — do not replace the whole block.

{
  "plugins": {
    "allow": ["engram"],
    "load": {
      "paths": ["~/.openclaw/plugins"]
    },
    "slots": {
      "memory": "engram"
    },
    "entries": {
      "engram": {
        "enabled": true,
        "config": {
          "url": "http://localhost:4901",
          "source": "openclaw",
          "autoRecall": true,
          "maxTokens": 1500
        }
      }
    }
  }
}
plugins.allow is required. Without it, OpenClaw will warn that a non-bundled plugin was discovered but not explicitly trusted, and the plugin may not load. List every non-bundled plugin id you use — e.g. "allow": ["engram"]. If you already have other third-party plugins, append "engram" to the existing array.
autoRecall: true injects the most relevant memories before each agent turn automatically. Conversations are also auto-stored as episodic memories via message_sent and message_received hooks — no manual calls needed. Set autoRecall to false if you prefer agents to call memory_recall explicitly.

6. Restart the gateway

openclaw gateway restart

Option B — withMemory() helper (TypeScript / JavaScript)

withMemory() is a single async call that fetches the most relevant Engram context for a query and returns it as a plain string, ready to inject into any prompt. It fails silently — if Engram is unreachable it returns an empty string so your agent continues normally.

1. Install the adapter

npm install @engram-ai-memory/adapter-openclaw
# or
pnpm add @engram-ai-memory/adapter-openclaw

2. Recall context before your prompt

import { withMemory } from '@engram-ai-memory/adapter-openclaw'

const context = await withMemory('What did the user ask about last time?', {
  url: 'http://localhost:4901',   // optional — defaults to ENGRAM_API env var
  source: 'my-agent',            // optional — tags stored memories by source
  maxTokens: 1500,               // optional — caps context length (default 1500)
})

// context is a formatted string — inject it into your system prompt
const systemPrompt = context
  ? `${context}\n\n--- your instructions below ---`
  : 'Your instructions here'

3. Store the agent reply

After your agent responds, store the output so it can be recalled in future runs:

import { EngramClient } from '@engram-ai-memory/adapter-openclaw'

const engram = new EngramClient({ url: 'http://localhost:4901', source: 'my-agent' })

await engram.store(
  'User asked about rate limiting — settled on token bucket at 100 req/min',
  'episodic',
  { importance: 0.8 }
)

Option C — EngramClient (full control)

Use EngramClient directly when you need fine-grained control over what gets recalled, stored, or searched — for example inside lifecycle hooks or a custom agent loop.

import { EngramClient } from '@engram-ai-memory/adapter-openclaw'

const engram = new EngramClient({
  url: 'http://localhost:4901',  // or set ENGRAM_API env var
  source: 'my-agent',           // identifies this agent in stored memories
  timeoutMs: 5000,              // request timeout (default 5000)
})

Client API

// Recall relevant context for a query
// Returns { context: string, memories: [...], latencyMs: number }
await engram.recall(query: string, maxTokens?: number)

// Store a memory
// type defaults to 'episodic'
await engram.store(content: string, type?: 'episodic' | 'semantic' | 'procedural', {
  importance?: number,   // 0.0–1.0
  tags?: string[],
  sessionId?: string,
})

// Semantic search — returns raw result array
await engram.search(query: string, {
  topK?: number,       // default 10
  threshold?: number,  // minimum similarity 0.0–1.0, default 0.3
  types?: string[],    // filter by memory type
})

// List all memories with optional filtering and pagination
// Returns { count: number, memories: MemoryEntry[] }
await engram.list({
  type?: string,       // filter: 'episodic' | 'semantic' | 'procedural'
  source?: string,     // filter by source tag
  limit?: number,      // default 50, max 200
  offset?: number,     // pagination offset, default 0
})

// Delete (archive) a memory by ID
await engram.forget(id: string)

// Get a single memory by ID
// Returns full MemoryEntry with content, type, importance, tags, etc.
await engram.getById(id: string)

// Memory statistics
// Returns { total, byType, bySource, graphNodes, graphEdges, indexSize }
await engram.stats()

// Health check — returns true if Engram is reachable
await engram.ping()

What gets injected

The context string returned by recall() and withMemory() looks like this — inject it verbatim at the top of your system prompt:

[NEURAL MEMORY CONTEXT]

[KNOWLEDGE]
• Postgres chosen over SQLite for production (importance 0.9)

[PATTERNS & SKILLS]
• When user asks about deploys → check migration status first

[PAST EVENTS & CONVERSATIONS]
• 2026-03-20: Discussed rate limiting — settled on token bucket

[END MEMORY CONTEXT]
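To make the injection concrete, here is a sketch of a minimal custom turn handler: it recalls context via the REST endpoints documented on this page, prepends it to the system prompt exactly as shown above, and stores the exchange afterward. runAgent is a placeholder for your own model call, not part of any API:

```typescript
// agent-turn.ts — minimal recall → respond → store loop (sketch).
const BASE = "http://localhost:4901"; // or read from the ENGRAM_API env var

// Prepend recalled context to the instructions (same shape as the Option B example).
export function buildSystemPrompt(context: string, instructions: string): string {
  return context ? `${context}\n\n--- your instructions below ---\n${instructions}` : instructions;
}

// Recall context for a query; degrade to an empty string if Engram is unreachable.
async function recallContext(query: string): Promise<string> {
  try {
    const res = await fetch(`${BASE}/api/recall`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, maxTokens: 1500, source: "my-agent" }),
    });
    if (!res.ok) return "";
    const data = await res.json();
    return data.context ?? "";
  } catch {
    return "";
  }
}

// Placeholder — replace with your actual LLM client call.
async function runAgent(systemPrompt: string, userMessage: string): Promise<string> {
  throw new Error("wire up your model here");
}

export async function handleTurn(userMessage: string): Promise<string> {
  const context = await recallContext(userMessage);
  const reply = await runAgent(buildSystemPrompt(context, "Your instructions here"), userMessage);
  // Store the exchange as an episodic memory so future turns can recall it.
  await fetch(`${BASE}/api/memory`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      content: `User: ${userMessage}\nAssistant: ${reply}`.slice(0, 1000),
      type: "episodic",
      importance: 0.5,
      source: "my-agent",
    }),
  }).catch(() => { /* degrade silently */ });
  return reply;
}
```

This is the same recall-before, store-after pattern the Option A plugin automates with its before_agent_start and message hooks.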

Verify it works

# Check Engram is reachable
curl http://localhost:4901/api/health

# Check stored memories
curl http://localhost:4901/api/stats

# Search for a specific memory
curl -s -X POST http://localhost:4901/api/search \
  -H 'Content-Type: application/json' \
  -d '{"query":"rate limiting","topK":3}' | jq '.'
If the health check fails, confirm the Engram server is running and that the URL in your config matches the server port (default 4901).