2026-02-13 · 10 min read

Giving OpenAI Agents Cognitive Memory: Shodh-Memory + Agents SDK

agentic-ai · integration · cognition


OpenAI's Agents SDK launched with a compelling premise: give LLMs tools, handoffs, and guardrails — then let them reason through multi-step tasks autonomously. It's the cleanest framework for building production agents in Python.

But there's a gap. Agents built with the SDK are stateless by default. Every run starts fresh. The agent that helped you debug a Kubernetes issue yesterday has zero memory of it today. Your multi-agent pipeline re-discovers the same project context on every invocation.

This isn't a limitation of the SDK — it's a missing layer. The SDK provides a Session protocol for persistence and a FunctionTool interface for capabilities. What it needs is a memory system that actually thinks.

That's what shodh-memory provides.

What Makes This Different from a Database

You could store conversation history in SQLite. You could dump embeddings into Pinecone. But that's storage, not cognition.

Shodh-memory implements biological memory dynamics:

- Memories strengthen with repeated access (Hebbian learning — neurons that fire together wire together)
- Unused memories decay naturally following Ebbinghaus forgetting curves (exponential for recent, power-law for older)
- Related concepts activate each other through spreading activation in a knowledge graph
- Memories promote through tiers: working → session → long-term, just like Cowan's embedded-processes model
- Frequently-accessed knowledge undergoes Long-Term Potentiation and becomes permanent

This means your agent doesn't just "store and retrieve." It learns. Context that matters gets stronger. Noise fades. Connections form between related experiences.
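To make the decay dynamics concrete, here is a toy sketch of the two regimes (exponential decay for recent memories, a slower power-law tail for older ones). The function and constants below are illustrative only, not shodh-memory's actual internals.

```python
import math

# Toy forgetting curve: exponential decay while a memory is recent,
# switching to a slower power-law tail once it ages past tau.
# Illustrative parameters only; not shodh-memory's actual implementation.
def retention(age_hours: float, tau: float = 24.0, alpha: float = 0.5) -> float:
    if age_hours <= tau:
        return math.exp(-age_hours / tau)            # recent: exponential
    knee = math.exp(-1.0)                            # value at the crossover point
    return knee * (tau / age_hours) ** alpha         # older: power-law tail

for hours in (1, 12, 24, 72, 720):
    print(f"{hours:>4}h old -> retention {retention(hours):.2f}")
```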

Two Integration Points

The shodh-memory adapter for OpenAI Agents SDK provides two components:

1. ShodhSession — Automatic Conversation Memory

The SDK's Session protocol lets you persist conversation state across runs. ShodhSession implements this with cognitive memory instead of flat storage:

```python
from shodh_memory.integrations.openai_agents import ShodhSession
from agents import Agent, Runner

session = ShodhSession(
    session_id="project-alpha",
    user_id="engineer-1",
    api_key="your-key",
)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful engineering assistant.",
)

# First conversation
result = await Runner.run(
    agent,
    "We're migrating from PostgreSQL to CockroachDB.",
    session=session,
)

# Hours later — agent remembers the migration context
result = await Runner.run(
    agent,
    "What should I watch out for with distributed transactions?",
    session=session,
)
```

Every conversation turn is encoded as a memory. When the agent runs again, previous context is retrieved through semantic search — not just replayed sequentially. The agent recalls what's relevant to the current query, not just the last N messages.

Sessions are isolated by session_id. Different projects, different conversations, no bleed-through.
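For orientation, the SDK's Session protocol boils down to four async methods. The sketch below shows a bare-bones, non-cognitive implementation of that surface; ShodhSession implements the same interface but backs it with semantic retrieval rather than flat replay. The method names follow the Agents SDK documentation, while the class and its bodies are placeholders invented for illustration.

```python
from typing import Any

# Minimal (non-cognitive) Session-protocol implementation, for comparison only.
class NaiveListSession:
    """Stores turns in a plain list: the 'flat storage' baseline."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._items: list[dict[str, Any]] = []

    async def get_items(self, limit: int | None = None) -> list[dict[str, Any]]:
        # Flat replay: just the last N items, no semantic ranking.
        return self._items[-limit:] if limit else list(self._items)

    async def add_items(self, items: list[dict[str, Any]]) -> None:
        self._items.extend(items)

    async def pop_item(self) -> dict[str, Any] | None:
        return self._items.pop() if self._items else None

    async def clear_session(self) -> None:
        self._items.clear()
```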

2. ShodhTools — Explicit Memory Operations

Sometimes agents need direct control over memory. ShodhTools provides 8 FunctionTool instances that agents can invoke:

```python
from shodh_memory.integrations.openai_agents import ShodhTools
from agents import Agent, Runner

tools = ShodhTools(
    user_id="engineer-1",
    api_key="your-key",
)

agent = Agent(
    name="Memory Agent",
    instructions=(
        "You have persistent cognitive memory. "
        "Use shodh_remember to store important facts, "
        "shodh_recall to search your memory, "
        "and shodh_add_todo for task tracking."
    ),
    tools=tools.as_list(),
)

result = await Runner.run(
    agent,
    "Remember that the auth service uses JWT with RS256, "
    "and add a todo to rotate the signing keys next sprint.",
)
```

The agent decides what to remember and when to recall. The LLM handles reasoning; shodh-memory handles persistence.

| Tool | What It Does |
|------|-------------|
| shodh_remember | Store a memory with type (Decision, Learning, Error, etc.) and tags |
| shodh_recall | Semantic search across all memories — hybrid vector + keyword |
| shodh_forget | Remove a memory (corrections, outdated facts) |
| shodh_context_summary | Get a condensed view of recent learnings and decisions |
| shodh_proactive_context | Surface memories relevant to the current conversation |
| shodh_add_todo | Create a task with project, priority, and due date |
| shodh_list_todos | List tasks filtered by status or project |
| shodh_complete_todo | Mark a task done (recurring tasks auto-create the next occurrence) |
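Because these are ordinary Agents SDK FunctionTool objects, you can inspect what the agent will actually see before wiring them in. A small sketch, reusing the tools instance from the example above:

```python
# Each entry from as_list() is a standard FunctionTool with a name and description,
# so you can audit the toolset before handing it to an agent.
for tool in tools.as_list():
    print(f"{tool.name}: {tool.description}")
```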

Architecture: What Happens Under the Hood

When an agent calls shodh_remember, here's what actually happens:

```
Agent calls shodh_remember("Auth uses JWT RS256")
  → HTTP POST to shodh-memory server (localhost:3030)
  → Content embedded via MiniLM-L6-v2 (384-dim, ONNX, <5ms)
  → Stored in RocksDB with MessagePack serialization
  → Indexed in Vamana graph for approximate nearest neighbor search
  → Entities extracted ("JWT", "RS256", "auth service")
  → Knowledge graph updated — edges form between related entities
  → Hebbian weights adjusted for co-accessed memories
```

When the agent later calls shodh_recall("authentication setup"):

```
Agent calls shodh_recall("authentication setup")
  → Query embedded via same MiniLM model
  → Multi-stage retrieval: vector search → BM25 rerank → temporal boost → graph expansion
  → Spreading activation surfaces related memories (JWT → signing keys → rotation schedule)
  → Accessed memories get a Hebbian strength boost (+0.025)
  → Results returned ranked by composite relevance score
```

The key insight: every recall makes those memories stronger. The more your agent uses a piece of knowledge, the harder it becomes to forget. This is biological learning applied to AI systems.
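As a back-of-the-envelope picture of that loop, the sketch below applies the +0.025 boost per recall; the starting strength and the cap at 1.0 (standing in for Long-Term Potentiation) are assumptions made for illustration.

```python
# Toy model of use-driven strengthening: each recall bumps strength by 0.025
# (the boost from the pipeline above); capping at 1.0 stands in for LTP.
# Not shodh-memory's actual code, just the shape of the dynamic.
strength = 0.30                      # assumed starting strength
for recall in range(1, 11):
    strength = min(1.0, strength + 0.025)
print(f"strength after 10 recalls: {strength:.3f}")   # 0.30 -> 0.55
```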

Why Not Just Use SQLite or Redis?

You could implement the Session protocol with SQLite. Store messages, retrieve them in order. It works.
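In fact, the SDK ships a built-in SQLiteSession that does exactly this. A minimal usage sketch, assuming the class is imported from the agents package as in the SDK docs:

```python
from agents import Agent, Runner, SQLiteSession

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Flat storage: messages persist across runs, retrieval is sequential replay.
session = SQLiteSession("project-alpha", "conversations.db")
result = await Runner.run(agent, "Pick up where we left off.", session=session)
```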

But you'd get flat storage. No semantic search — just sequential replay. No decay — your session grows unbounded. No learning — accessing a memory doesn't make it stronger. No graph — related concepts don't activate each other.

| Feature | SQLite/Redis | Shodh-Memory |
|---------|-------------|--------------|
| Storage | Sequential messages | Semantic memories |
| Retrieval | By index/key | By meaning (hybrid vector + keyword) |
| Decay | Manual cleanup | Automatic (Ebbinghaus curves) |
| Learning | None | Hebbian (strengthens with use) |
| Context | Last N messages | Relevant memories from any point |
| Connections | None | Knowledge graph with spreading activation |
| Task tracking | Not built-in | Full GTD system |

The difference is between a filing cabinet and a brain.

Running It

Shodh-memory runs as a single binary — no Docker, no cloud, no API keys to manage:

```
# Install
pip install shodh-memory[openai-agents]

# Start the server (downloads MiniLM model on first run)
shodh-memory serve

# Or download the binary directly:
# https://github.com/varun29ankuS/shodh-memory/releases
```

Everything runs locally. Your agent's memories never leave your machine. Embeddings are computed locally via ONNX Runtime. Storage is local RocksDB. No cloud dependency whatsoever.

Combining Both: Session + Tools

The most powerful pattern combines automatic session memory with explicit tool control:

```python
from shodh_memory.integrations.openai_agents import ShodhSession, ShodhTools
from agents import Agent, Runner

session = ShodhSession(
    session_id="project-alpha",
    user_id="engineer-1",
)

tools = ShodhTools(user_id="engineer-1")

agent = Agent(
    name="Project Assistant",
    instructions=(
        "You have persistent cognitive memory. "
        "Conversation history is automatic. "
        "Use shodh_remember for important decisions, "
        "shodh_recall to search past context, "
        "and shodh_add_todo to track work items."
    ),
    tools=tools.as_list(),
)

# Session handles conversation continuity
# Tools handle explicit memory operations
result = await Runner.run(
    agent,
    "What did we decide about the caching strategy?",
    session=session,
)
```

The session provides continuity — the agent knows what you discussed before. The tools provide agency — the agent can explicitly remember decisions, recall past context, and track tasks.

Multi-Agent Memory

Different agents can share a memory pool by using the same user_id, or maintain isolated memories with different user_ids. This enables patterns like:

- A research agent stores findings → an implementation agent recalls them
- A code review agent remembers past issues → a writing agent avoids them
- Multiple specialized agents contribute to a shared project knowledge base

The knowledge graph connects insights across agents. When one agent stores "prefer Rust for performance-critical paths" and another stores "the hot loop is in the parser," a recall query about "parser optimization" surfaces both — because the graph links performance → Rust and parser → hot loop.
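A minimal sketch of the shared-pool pattern, reusing the ShodhTools constructor from earlier; the agent names and instructions here are invented for illustration:

```python
from shodh_memory.integrations.openai_agents import ShodhTools
from agents import Agent, Runner

# Both agents point at the same user_id, so they read and write one memory pool.
shared_tools = ShodhTools(user_id="team-alpha")

researcher = Agent(
    name="Researcher",
    instructions="Investigate options and store findings with shodh_remember.",
    tools=shared_tools.as_list(),
)
implementer = Agent(
    name="Implementer",
    instructions="Before coding, use shodh_recall to pull the researcher's findings.",
    tools=shared_tools.as_list(),
)

await Runner.run(researcher, "Evaluate parsers; note that the hot loop is in the parser.")
result = await Runner.run(implementer, "Optimize the parser using what we already know.")
```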

What This Means for Agent Builders

The OpenAI Agents SDK gives you the scaffolding for capable agents. But capability without memory is amnesia with good reasoning.

Adding shodh-memory takes five lines of code and gives your agent:

- Cross-session continuity that actually understands context
- Memories that strengthen with use and decay when irrelevant
- A knowledge graph that connects concepts automatically
- Task management that persists across conversations
- Complete privacy — everything stays on your hardware

The difference between an agent that's useful once and an agent that gets better over time is memory. Not storage. Cognition.

---

*shodh-memory is open-source (Apache 2.0) and available at github.com/varun29ankuS/shodh-memory. Install the OpenAI Agents adapter with: pip install shodh-memory[openai-agents]*