Engineering Memory

Where cognitive science meets systems engineering.

Deep-dives into how memory actually works — in brains and in machines. Hebbian learning, forgetting curves, spreading activation, knowledge graphs, and the engineering required to make AI agents that genuinely remember.

Cognition & Neuroscience

How the brain remembers — and how we translated decades of cognitive science into code

Graph Databases for AI Memory: Why Your Agent Needs a Knowledge Graph

Vector search finds similar things. Graphs find connected things. Your agent needs both. How graph databases enable causal reasoning, spreading activation, and entity disambiguation in AI memory systems.

14 min read

Cognitive Architectures for AI Agents: From ACT-R to Modern Memory Systems

GPT-4 has no architecture. It has a context window. From SOAR and ACT-R to Cowan's embedded processes model — how cognitive science maps to AI agent memory, and why the 'limited memory AI' problem persists.

15 min read

Types of Memory in AI: From Working Memory to Long-Term Potentiation

The definitive guide to memory types — sensory, working, long-term, episodic, semantic, procedural. How each maps from neuroscience to AI systems, with ASCII diagrams of the full memory hierarchy.

13 min read

What Your AI Doesn't Know It Doesn't Know: Why Memory Needs Diagnostic Introspection

Claude.md files get compacted. Rules degrade to suggestions. Context windows are lossy. The real problem isn't what your AI forgets — it's that it can't tell you what it's forgotten. Here's how a knowledge graph fixes that.

11 min read

Hopfield Networks, the Nobel Prize, and Why AI Memory Was Right All Along

In 1982, John Hopfield showed how networks of neurons can store and recall memories. The field ignored it for 35 years. Then it won a Nobel Prize — and turned out to be the foundation of transformers.

14 min read

Learning Without Backpropagation: Why Local Rules Beat Global Gradients for Memory

Backpropagation powers every modern AI model. But the brain can't do it — and Hinton, the man who popularized it, published an alternative. Here's why local learning rules matter for memory systems.

12 min read

The Great Divorce: How AI Abandoned Neuroscience — And Why It's Coming Back

AI started as brain simulation. Then it optimized for benchmarks instead. The 80-year story of how artificial intelligence and neuroscience split apart — and the signs they're reuniting.

15 min read

Why Your AI Agent's Memory Is Broken — And How Neuroscience Fixes It

Your agent forgets everything between sessions. RAG doesn't help. Vector databases don't help. The fix comes from 75 years of memory research — Hebbian learning, spreading activation, and decay curves that actually work.

12 min read

What Is AI Agent Memory? Beyond Chat History and RAG

AI agents are everywhere — but most forget everything between runs. What is agent memory, how does it differ from RAG, and what does a real memory system look like?

10 min read

What Is AI Memory? A Technical Guide for 2026

AI memory is not a database. It is not RAG. It is not context windows. This guide explains what AI memory actually is — and why every serious AI system will need one.

9 min read

Cognitive Memory: The Missing Piece in the AI Arms Race

Every lab is racing on reasoning, planning, and tool use. But memory — the ability to learn from experience — is the capability nobody is shipping. That is about to change.

10 min read

Giving OpenAI Agents Cognitive Memory: Shodh-Memory + Agents SDK

OpenAI's Agents SDK gives agents tools and handoffs. Shodh-memory gives them cognition — memory that strengthens with use, decays naturally, and surfaces context before you ask.

10 min read

Building AI Agents That Actually Learn: Beyond Prompt Engineering

Most AI agents are stateless wrappers around LLMs. They don't learn — they restart. Here's what it takes to build agents with genuine cognitive continuity.

9 min read

Vector Search Beyond Cosine Similarity: Graph-Based Approaches

Cosine similarity is chapter one. Graph-based search (Vamana, DiskANN) is chapter two. How shodh-memory auto-scales from HNSW to SPANN at 100K vectors.

10 min read

Cognitive Architecture for AI Systems: What Neuroscience Actually Teaches Us

Baddeley's working memory. Cowan's embedded processes. Ebbinghaus forgetting curves. Hebb's rule. How each maps to engineering decisions in a real system.

11 min read

Memory in Multi-Agent Systems: When AI Agents Share a Brain

When multiple agents share memory, you get emergent coordination. When they don't, you get chaos. The architecture of shared cognitive systems.

8 min read

The Memory Layer: Why Every AI System Will Have One by 2027

Compute has a layer. Storage has a layer. Networking has a layer. Memory is the missing layer in AI infrastructure — and it's arriving now.

9 min read

Hebbian Learning for AI Agents: Neurons That Fire Together Wire Together

How we implemented biological learning principles in shodh-memory. When memories are accessed together, their connection strengthens — just like synapses in the brain.

8 min read
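The "fire together, wire together" rule that post describes can be sketched in a few lines. This is an illustrative toy, not shodh-memory's implementation; the learning rate and the saturation bound are assumptions chosen for the example:

```python
# Illustrative Hebbian co-access rule: when two memories are retrieved
# together, the weight of the edge between them increases, with diminishing
# returns as it approaches the saturation bound of 1.0.
def hebbian_update(weight: float, learning_rate: float = 0.1) -> float:
    """Strengthen a connection after one co-access event."""
    return min(1.0, weight + learning_rate * (1.0 - weight))

w = 0.2
for _ in range(3):   # three co-accesses
    w = hebbian_update(w)
print(w)             # grows toward 1.0, each step smaller than the last
```

The multiplicative `(1.0 - weight)` term is what gives the biologically plausible saturation: well-established connections strengthen more slowly than new ones.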

The Three-Tier Memory Architecture: From Cowan to Code

Deep dive into our sensory buffer, working memory, and long-term memory tiers. Based on Nelson Cowan's embedded-processes model.

10 min read

Memory Decay and Forgetting Curves: The Math Behind Remembering

Ebbinghaus showed us forgetting is predictable. We implement hybrid exponential + power-law decay for realistic memory behavior.

9 min read
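The hybrid curve that post describes can be sketched as a blend of an exponential term (fast early forgetting) and a power-law term (heavy tail). The constants and the mixing factor below are made up for illustration, not shodh-memory's actual parameters:

```python
import math

# Illustrative hybrid forgetting curve: exponential decay dominates early,
# a heavy-tailed power law dominates later, blended by a mixing factor.
def retention(t_hours: float, strength: float = 1.0, mix: float = 0.5) -> float:
    exp_part = math.exp(-t_hours / (24.0 * strength))       # Ebbinghaus-style
    pow_part = (1.0 + t_hours) ** (-0.5 / strength)         # power-law tail
    return mix * exp_part + (1.0 - mix) * pow_part

print(retention(0.0))    # fresh memory: full retention
print(retention(24.0))   # one day later: substantially decayed
```

The pure exponential underestimates long-term retention; the power-law tail is why something studied weeks ago is still partly recallable.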

Knowledge Graphs and Spreading Activation: How Context Surfaces

When you access one memory, related concepts activate too. We implement spreading activation for proactive context retrieval.

11 min read
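Spreading activation as that post describes it can be sketched as a frontier walk over a weighted adjacency list: activation starts at the accessed memory, propagates along edges, attenuates per hop, and stops below a firing threshold. The graph, decay factor, and threshold here are illustrative assumptions:

```python
# Illustrative spreading activation over a tiny weighted graph.
def spread(graph, source, decay=0.5, threshold=0.1):
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor, weight in graph.get(node, []):
                a = activation[node] * weight * decay
                # Only propagate if this path raises the neighbor's activation
                # and clears the firing threshold.
                if a > activation.get(neighbor, 0.0) and a >= threshold:
                    activation[neighbor] = a
                    nxt.append(neighbor)
        frontier = nxt
    return activation

graph = {
    "rust": [("tokio", 0.9), ("cargo", 0.8)],
    "tokio": [("async", 0.7)],
}
print(spread(graph, "rust"))
# direct neighbors activate strongly; "async" activates weakly, two hops out
```

This is why accessing one memory can surface related context proactively: anything above threshold is a candidate for retrieval even though it was never queried directly.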

Long-Term Potentiation in Code: Making Memories Permanent

In the brain, repeated activation makes synapses permanent. We implement LTP so frequently-used knowledge resists decay.

7 min read
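A minimal sketch of the idea in that post: track accesses, and once a memory crosses a consolidation threshold, sharply reduce its decay rate so it resists forgetting. The threshold and the 10x rate difference are toy numbers, not shodh-memory's real LTP parameters:

```python
# Illustrative long-term potentiation: repeated access consolidates a memory,
# and consolidated memories decay much more slowly.
class Memory:
    LTP_THRESHOLD = 5  # accesses needed to consolidate (illustrative)

    def __init__(self):
        self.access_count = 0
        self.consolidated = False

    def access(self):
        self.access_count += 1
        if self.access_count >= self.LTP_THRESHOLD:
            self.consolidated = True

    def decay_rate(self) -> float:
        # Consolidated memories decay 10x more slowly in this toy model.
        return 0.01 if self.consolidated else 0.1

m = Memory()
for _ in range(5):
    m.access()
print(m.consolidated, m.decay_rate())
```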

Architecture & Engineering

System design decisions, storage engines, and the infrastructure of memory

Best AI Agent Frameworks 2026: LangChain, CrewAI, AutoGen, OpenAI Agents SDK Compared

Every framework solves orchestration. None of them solve memory. A head-to-head comparison of LangChain, CrewAI, AutoGen, and OpenAI Agents SDK — what each does well, where each falls short, and why memory is the missing layer.

13 min read

Vector Databases Explained: How Semantic Search Powers AI Memory

Ctrl+F finds exact matches. Vector search finds meaning. How embeddings, cosine similarity, and indexing algorithms work — from brute force to Vamana to SPANN — and why vector search alone is not memory.

12 min read
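The core contrast in that teaser, Ctrl+F versus meaning, comes down to ranking by angle between embedding vectors rather than by exact token overlap. A minimal cosine similarity, with made-up three-dimensional "embeddings" standing in for real model output:

```python
import math

# Cosine similarity: dot product of two vectors divided by the product
# of their magnitudes, i.e. the cosine of the angle between them.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]
doc_similar = [0.8, 0.2, 0.1]    # points in roughly the same direction
doc_unrelated = [0.0, 0.1, 0.9]  # points in a different direction
print(cosine(query, doc_similar) > cosine(query, doc_unrelated))  # True
```

Brute-force search computes this against every stored vector; index structures like HNSW and SPANN exist to avoid that linear scan at scale.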

OpenAI Killed the Assistants API. Here's How to Own Your Memory Layer

The Assistants API shuts down August 2026. Threads, runs, conversation state — all gone. Here's how to migrate to a standalone memory layer you own, with step-by-step code for the Responses API + shodh-memory.

14 min read

shodh-memory vs mem0 vs Zep vs MemGPT: Which AI Agent Memory System Should You Use?

A head-to-head comparison of every major AI agent memory system in 2026. Architecture, privacy, performance, cognitive capabilities — and which one fits your use case.

14 min read

How to Make Your AI Agent Remember Between Sessions

Your AI agent forgets everything when the session ends. Here's how to add persistent memory that actually works — not chat history, not RAG, but real cognitive memory.

10 min read

Why Robotics Still Doesn't Have Memory (And How to Fix It)

Robots can see, grasp, and navigate. But they can't remember what they learned yesterday. The robotics memory gap is real — and solving it requires rethinking how memory works.

9 min read

MCP: The Protocol That Will Define How AI Tools Communicate

Model Context Protocol is to AI tools what HTTP was to the web. A deep dive into the protocol, its design, and why memory is its killer app.

8 min read

Why We Chose Rust for AI Infrastructure (And When You Shouldn't)

An honest take on Rust for AI systems. The wins: memory safety, zero-cost abstractions, cross-compilation to ARM. The costs: iteration speed, ecosystem gaps.

8 min read

RocksDB for AI Workloads: Lessons from Building a Memory Engine

Why we chose RocksDB over SQLite and Postgres. Column families, prefix iterators, write-ahead logging, and the compaction strategies that actually matter.

9 min read

Running Embedding Models on Edge Devices: ONNX, Quantization, and Reality

Getting MiniLM-L6-v2 to run on a Raspberry Pi at 34ms per embedding. ONNX Runtime, model quantization, batch processing, and the circuit breaker that saved us.

10 min read

RAG Is Not Memory: Why Your AI Still Has Amnesia

Everyone thinks RAG solves the memory problem. It doesn't. Retrieval is not remembering. Here's the difference — and why it matters.

8 min read

Memory Architecture for Autonomous Agents: Why Your AI Needs a Brain, Not a Database

Autonomous agents are everywhere — coding assistants, research bots, robotic systems. But most are goldfish. Here's how to give your agent a real brain.

10 min read

Why Vector Search Alone Isn't Enough for Agent Memory

Vector similarity is great, but agents need more. We explain why shodh-memory combines vectors with knowledge graphs and temporal indices.

7 min read

Benchmarking AI Memory Systems: Latency, Accuracy, and Scale

How does shodh-memory compare to alternatives? We share our benchmarking methodology and results across key metrics.

8 min read

AI Agents & MCP

Building agents that remember, learn, and coordinate through shared cognitive systems

AI Model Pricing Guide 2026: Claude, GPT-4.1, Grok, Gemini, DeepSeek Compared

Every AI model's API pricing in one place — Claude, GPT-4.1, Gemini 2.5, Grok 3, DeepSeek V3. Input/output costs, context windows, subscription tiers, and the hidden cost nobody talks about: re-explaining context every session.

15 min read

ChatGPT Memory Is Full? Here's Unlimited AI Memory That Never Fills Up

ChatGPT memory fills up in a day and forces you to manually delete facts. Here's an open-source alternative with no storage cap — cognitive memory that strengthens with use, decays naturally, and runs 100% offline.

12 min read

How to Add Real Memory to a LangChain Agent (Beyond ConversationBufferMemory)

LangChain's built-in memory is a chat history buffer. Here's how to give your LangChain agent persistent, cognitive memory that strengthens with use and decays naturally.

11 min read

How to Set Up Persistent Memory for Claude Code and Cursor

Claude Code and Cursor are powerful — but they forget everything between sessions. Here's how to give them persistent memory in under 5 minutes using shodh-memory's MCP server.

9 min read

Why Your Coding Assistant Forgets Everything: Fixing AI Memory for Developers

You've explained your project structure 47 times this month. Your AI assistant has the memory of a goldfish. Here's how to fix that.

7 min read

The Agentic Shift: Why 2026 Is the Year AI Stops Waiting for Prompts

We're witnessing the biggest shift in AI since ChatGPT. Agents that act, remember, and learn — not chatbots that wait. Here's what's actually changing.

9 min read

Integrating shodh-memory with Claude Code and Cursor via MCP

Complete guide to adding persistent memory to your AI coding assistant. One command to remember everything across sessions.

6 min read

Edge & Robotics

Running cognitive memory on Raspberry Pis, robots, and air-gapped systems

48 posts on cognition, memory architecture, and AI systems