2026-03-19 · 9 min read

How to Set Up Persistent Memory for Claude Code and Cursor

tutorial · mcp · developer-tools


Claude Code and Cursor are two of the most powerful AI coding assistants in 2026. But they share the same fundamental limitation: they forget everything between sessions.

Start a new conversation with Claude Code, and it doesn't remember your codebase preferences, the architectural decisions you made last week, or the debugging patterns that worked for your specific stack. Cursor resets the same way — every session is a cold start.

shodh-memory fixes this using the Model Context Protocol (MCP). Setup takes under 5 minutes. Here's how.

How It Works

MCP (Model Context Protocol) is a standard that lets AI assistants connect to external tools and data sources. shodh-memory implements an MCP server that exposes 45 memory tools — remember, recall, forget, proactive context, todos, and more.

When connected, your AI assistant can:

- **Automatically store** observations, decisions, and learnings
- **Recall** relevant context when you start a new session
- **Strengthen** frequently used knowledge (Hebbian learning)
- **Let unused knowledge decay** naturally (biologically plausible forgetting curves)
- **Track todos** across sessions with a built-in GTD system

Everything runs locally. No cloud. No API keys. No data leaving your machine.
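Under the hood, MCP tools are invoked via JSON-RPC `tools/call` requests. As a rough sketch of what a `remember` call looks like on the wire (the tool name comes from shodh-memory's tool list; the `content` argument field is illustrative, not the package's documented schema):

```typescript
// Hypothetical shape of the MCP tools/call request an assistant sends to
// the shodh-memory server. The argument field names are illustrative.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

const rememberRequest: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "remember",
    arguments: { content: "user prefers Rust for systems code" },
  },
};

// Serialized and written to the server's stdin by the MCP client.
console.log(JSON.stringify(rememberRequest));
```

The assistant never constructs this by hand, of course; the MCP client in Claude Code or Cursor does it whenever the model decides to call a tool.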

Setup for Claude Code

Step 1: Install

```bash
npm install -g @shodh/memory-mcp
```

Step 2: Add to Claude Code MCP config

Open your Claude Code settings and add shodh-memory as an MCP server:

```json
{
  "mcpServers": {
    "shodh-memory": {
      "command": "npx",
      "args": ["-y", "@shodh/memory-mcp"]
    }
  }
}
```

That's it. Claude Code now has access to 45 memory tools.

Step 3: Enable Hooks (Optional, Recommended)

For automatic memory capture, add hooks to your project's `.claude/settings.json`:

```json
{
  "hooks": {
    "on_tool_use": "npx @shodh/memory-mcp hook-tool",
    "on_message": "npx @shodh/memory-mcp hook-message"
  }
}
```

With hooks enabled, shodh-memory automatically captures context from tool usage and conversations. You don't need to explicitly tell Claude to remember things — it happens in the background.

Setup for Cursor

Step 1: Install

```bash
npm install -g @shodh/memory-mcp
```

Step 2: Add to Cursor MCP config

In Cursor, go to Settings → MCP Servers → Add Server:

```json
{
  "mcpServers": {
    "shodh-memory": {
      "command": "npx",
      "args": ["-y", "@shodh/memory-mcp"]
    }
  }
}
```

Cursor will now show shodh-memory's tools in the MCP panel.

What Happens After Setup

First Session

Use your AI assistant normally. As you work, shodh-memory captures:

- Your coding preferences ("user prefers Rust for systems code")
- Architectural decisions ("chose JWT over session cookies for auth")
- Debugging patterns ("the CORS issue was caused by missing headers in the middleware")
- File structure knowledge ("src/handlers/ contains all API route handlers")

These are stored locally in RocksDB with vector embeddings, entity extraction, and knowledge graph connections — all computed on your machine using MiniLM-L6-v2.
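The retrieval side of this can be pictured as cosine similarity between embedding vectors. A toy sketch, using 3-dimensional stand-ins for MiniLM-L6-v2's 384-dimensional embeddings (not shodh-memory's actual code):

```typescript
// Toy semantic search: rank stored memories by cosine similarity to a query.
// Real MiniLM-L6-v2 embeddings have 384 dimensions; these are stand-ins.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const memories = [
  { text: "chose JWT over session cookies", vec: [0.9, 0.1, 0.2] },
  { text: "src/handlers/ holds API routes", vec: [0.1, 0.8, 0.3] },
];
const query = { text: "how do we do auth?", vec: [0.85, 0.2, 0.15] };

const best = memories
  .map((m) => ({ ...m, score: cosine(m.vec, query.vec) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.text); // the JWT memory ranks first for this toy query
```

Because the query matches on meaning rather than keywords, "how do we do auth?" surfaces the JWT decision even though neither memory contains the word "auth".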

Second Session

When you start a new session, shodh-memory's proactive context surfaces relevant memories based on what you're working on. If you open a file in the auth module, memories about your JWT decisions, session handling, and that CORS fix automatically appear in context.

You don't need to re-explain your stack, your preferences, or your project structure. The assistant already knows.

After a Week

Frequently accessed knowledge has been promoted from working memory to long-term storage. The knowledge graph has dense clusters around your most-discussed topics. Memories you never revisited have begun to decay, naturally reducing noise.

The assistant surfaces context with increasing accuracy. It knows not just what you said, but which things you keep coming back to — and those are the things it remembers best.
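The strengthen-and-decay dynamic can be sketched as a toy model: each access boosts a memory's strength, and strength decays exponentially with time since the last access. The constants below are made up for illustration; this post doesn't document shodh-memory's actual curves:

```typescript
// Toy forgetting curve: strength decays exponentially between accesses,
// and each access adds a fixed boost (a crude Hebbian "use it or lose it").
// HALF_LIFE_DAYS and BOOST are illustrative, not shodh-memory's constants.
const HALF_LIFE_DAYS = 7;
const BOOST = 0.5;

function decayed(strength: number, daysSinceAccess: number): number {
  return strength * Math.pow(0.5, daysSinceAccess / HALF_LIFE_DAYS);
}

function access(strength: number, daysSinceAccess: number): number {
  return decayed(strength, daysSinceAccess) + BOOST;
}

// A memory touched daily keeps gaining strength...
let hot = 1.0;
for (let day = 0; day < 7; day++) hot = access(hot, 1);

// ...while one untouched for a week has halved.
const cold = decayed(1.0, 7);

console.log(hot.toFixed(2), cold.toFixed(2));
```

The net effect is the ranking behavior described above: what you keep returning to wins, and stale context stops competing for the assistant's attention.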

Memory Tools Available

Once connected, your AI assistant has access to these core tools:

| Tool | Purpose |
| --- | --- |
| `remember` | Store an observation, decision, or learning |
| `recall` | Semantic search across all memories |
| `proactive_context` | Get context relevant to current work |
| `forget` | Suppress or correct a memory |
| `context_summary` | Overview of recent learnings and decisions |
| `add_todo` | Track work across sessions (GTD system) |
| `list_todos` | See pending tasks |
| `complete_todo` | Mark work as done |
| `set_reminder` | Time-based or context-triggered reminders |
| `memory_stats` | See how your memory system is performing |

The full MCP server exposes 45 tools covering memory, todos, projects, reminders, and system health.
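The GTD tools compose into a simple cross-session flow. A sketch of the tool-call sequence (tool names are from the table above; the argument shapes are assumptions, not the documented schema):

```typescript
// Hypothetical cross-session todo flow expressed as MCP tool calls.
// Tool names match the table above; argument fields are illustrative.
type ToolCall = { name: string; arguments: Record<string, unknown> };

// Session 1: capture the task and why it's blocked.
const session1: ToolCall[] = [
  { name: "add_todo", arguments: { task: "migrate auth middleware to JWT" } },
  { name: "remember", arguments: { content: "JWT migration blocked on CORS fix" } },
];

// Session 2 (days later): the assistant picks up where it left off.
const session2: ToolCall[] = [
  { name: "list_todos", arguments: {} },
  { name: "complete_todo", arguments: { task: "migrate auth middleware to JWT" } },
];

for (const call of [...session1, ...session2]) console.log(call.name);
```

Because the todo list lives in the same local store as the memories, "what was I doing?" at the start of a session is a tool call, not an archaeology exercise.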

Performance Impact

shodh-memory runs as a separate process. It does not slow down your IDE or your AI assistant.

- **Memory usage:** ~200MB baseline (models + runtime)
- **CPU impact:** negligible between operations
- **Disk usage:** ~2-5KB per memory, scales linearly
- **Latency added:** <1ms for writes, 34-58ms for semantic search (runs in the background, doesn't block the assistant)

On a typical development machine, you won't notice it's running.

Troubleshooting

**"MCP server not found"** — Make sure the npm package is installed globally: `npm install -g @shodh/memory-mcp`

**"Connection refused"** — shodh-memory starts on port 3030 by default. Check if another process is using that port.
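A quick way to check is to probe the port with Node.js (which you already have, since the server installs via npm); a refused connection means the port is free. A small sketch:

```typescript
// Probe a local TCP port: resolves true if something is listening,
// false if the connection is refused or times out (port is free).
import * as net from "node:net";

function checkPort(port: number, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ port, host });
    socket.setTimeout(1000);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("error", () => resolve(false));
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
  });
}

checkPort(3030).then((inUse) =>
  console.log(inUse ? "port 3030 is in use" : "port 3030 is free"));
```

If the port is taken, stop the conflicting process or move it to another port before starting shodh-memory.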

**"No memories found"** — Memory needs to accumulate. Use the assistant for a few sessions before expecting rich context retrieval.

**"High memory usage"** — The embedding model (MiniLM-L6-v2, ~22MB) loads on first use. This is a one-time cost and stays resident for fast inference.

Privacy Note

Everything runs on your machine. shodh-memory makes zero network requests. Your memories, embeddings, knowledge graph, and todos are stored in a local RocksDB database in your home directory. No telemetry. No cloud sync. No API keys required.

Your AI assistant's memory is your data, on your hardware, under your control.
