Engineering Memory

Where cognitive science meets systems engineering.

Deep-dives into how memory actually works — in brains and in machines. Hebbian learning, forgetting curves, spreading activation, knowledge graphs, and the engineering required to make AI agents that genuinely remember.

Cognition & Neuroscience

How the brain remembers — and how we translated decades of cognitive science into code

What Is AI Agent Memory? Beyond Chat History and RAG

AI agents are everywhere — but most forget everything between runs. What is agent memory, how does it differ from RAG, and what does a real memory system look like?

2026-02-17|10 min

What Is AI Memory? A Technical Guide for 2026

AI memory is not a database. It is not RAG. It is not context windows. This guide explains what AI memory actually is — and why every serious AI system will need one.

2026-02-16|9 min

Cognitive Memory: The Missing Piece in the AI Arms Race

Every lab is racing on reasoning, planning, and tool use. But memory — the ability to learn from experience — is the capability nobody is shipping. That is about to change.

2026-02-14|10 min

Giving OpenAI Agents Cognitive Memory: Shodh-Memory + Agents SDK

OpenAI's Agents SDK gives agents tools and handoffs. shodh-memory gives them cognition — memory that strengthens with use, decays naturally, and surfaces context before you ask.

2026-02-13|10 min

Building AI Agents That Actually Learn: Beyond Prompt Engineering

Most AI agents are stateless wrappers around LLMs. They don't learn — they restart. Here's what it takes to build agents with genuine cognitive continuity.

2026-02-10|9 min

Vector Search Beyond Cosine Similarity: Graph-Based Approaches

Cosine similarity is chapter one. Graph-based search (Vamana, DiskANN) is chapter two. How shodh-memory auto-scales from HNSW to SPANN at 100K vectors.

2026-02-02|10 min

Cognitive Architecture for AI Systems: What Neuroscience Actually Teaches Us

Baddeley's working memory. Cowan's embedded processes. Ebbinghaus forgetting curves. Hebb's rule. How each maps to engineering decisions in a real system.

2026-01-28|11 min

Memory in Multi-Agent Systems: When AI Agents Share a Brain

When multiple agents share memory, you get emergent coordination. When they don't, you get chaos. The architecture of shared cognitive systems.

2026-01-18|8 min

The Memory Layer: Why Every AI System Will Have One by 2027

Compute has a layer. Storage has a layer. Networking has a layer. Memory is the missing layer in AI infrastructure — and it's arriving now.

2026-01-12|9 min

Hebbian Learning for AI Agents: Neurons That Fire Together Wire Together

How we implemented biological learning principles in shodh-memory. When memories are accessed together, their connection strengthens—just like synapses in the brain.

2026-01-03|8 min
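The co-access strengthening the teaser describes can be sketched in a few lines. This is an illustrative sketch only — the struct, field names, and constants here are hypothetical, not shodh-memory's actual API:

```rust
use std::collections::HashMap;

// Hypothetical association store: undirected edge weights between memory IDs.
struct Associations {
    weights: HashMap<(u32, u32), f64>,
    learning_rate: f64,
    max_weight: f64,
}

impl Associations {
    // Hebbian update: each co-access moves the edge weight toward a ceiling,
    // so repeated co-access strengthens the link but saturates.
    fn co_access(&mut self, a: u32, b: u32) {
        let key = if a < b { (a, b) } else { (b, a) };
        let w = self.weights.entry(key).or_insert(0.0);
        *w += self.learning_rate * (self.max_weight - *w);
    }
}

fn main() {
    let mut assoc = Associations {
        weights: HashMap::new(),
        learning_rate: 0.1,
        max_weight: 1.0,
    };
    assoc.co_access(1, 2);
    let first = assoc.weights[&(1, 2)];
    assoc.co_access(1, 2);
    let second = assoc.weights[&(1, 2)];
    assert!(second > first); // fire together, wire together
    assert!(second < 1.0);   // but bounded: weights saturate at the ceiling
}
```

The saturating update is the key design choice: without a ceiling, frequently co-accessed pairs would grow without bound and drown out everything else.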

The Three-Tier Memory Architecture: From Cowan to Code

Deep dive into our sensory buffer, working memory, and long-term memory tiers. Based on Nelson Cowan's embedded-processes model.

2025-12-28|10 min

Memory Decay and Forgetting Curves: The Math Behind Remembering

Ebbinghaus showed us forgetting is predictable. We implement a hybrid exponential + power-law decay for realistic memory behavior.

2025-12-22|9 min
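A hybrid exponential + power-law curve can be sketched as a weighted blend of the two terms. The function and parameter names below are illustrative assumptions, not shodh-memory's actual formula or constants:

```rust
// Hypothetical hybrid retention curve:
//   exponential term -> fast early forgetting
//   power-law term   -> heavy long-term tail (Ebbinghaus-style)
fn retention(t_hours: f64, lambda: f64, alpha: f64, w: f64) -> f64 {
    let exponential = (-lambda * t_hours).exp();
    let power_law = (1.0 + t_hours).powf(-alpha);
    w * exponential + (1.0 - w) * power_law
}

fn main() {
    let fresh = retention(0.0, 0.1, 0.5, 0.5);
    let day = retention(24.0, 0.1, 0.5, 0.5);
    let week = retention(168.0, 0.1, 0.5, 0.5);
    assert!((fresh - 1.0).abs() < 1e-9); // full strength at t = 0
    assert!(day < fresh && week < day);  // monotone forgetting
    assert!(week > 0.0);                 // power-law tail never hits zero
}
```

The blend matters because pure exponential decay forgets old memories too aggressively, while a pure power law forgets recent ones too slowly; mixing the two matches observed forgetting curves better.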

Knowledge Graphs and Spreading Activation: How Context Surfaces

When you access one memory, related concepts activate too. We implement spreading activation for proactive context retrieval.

2025-12-18|11 min
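Spreading activation over a knowledge graph can be sketched as energy propagating outward from an accessed node, fading with each hop until it drops below a cutoff. The graph shape, decay factor, and threshold below are illustrative assumptions, not shodh-memory's internals:

```rust
use std::collections::HashMap;

// Activation spreads from `source` through weighted edges, losing energy
// per hop; propagation stops once activation falls below `threshold`.
fn spread(
    edges: &HashMap<u32, Vec<(u32, f64)>>, // node -> (neighbor, edge weight)
    source: u32,
    decay: f64,
    threshold: f64,
) -> HashMap<u32, f64> {
    let mut activation: HashMap<u32, f64> = HashMap::new();
    activation.insert(source, 1.0);
    let mut frontier = vec![(source, 1.0_f64)];
    while let Some((node, energy)) = frontier.pop() {
        for &(next, weight) in edges.get(&node).into_iter().flatten() {
            let e = energy * weight * decay;
            if e <= threshold {
                continue; // too faint: this branch stops spreading
            }
            let best = activation.entry(next).or_insert(0.0);
            if e > *best {
                *best = e;
                frontier.push((next, e));
            }
        }
    }
    activation
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert(1, vec![(2, 0.9), (3, 0.08)]);
    edges.insert(2, vec![(4, 0.8)]);
    let act = spread(&edges, 1, 0.5, 0.05);
    assert!(act[&2] > act[&4]);     // activation fades with distance
    assert!(!act.contains_key(&3)); // weak links fall below the cutoff
}
```

The threshold is what makes this proactive retrieval tractable: only strongly connected context surfaces, instead of the whole graph lighting up on every access.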

Long-Term Potentiation in Code: Making Memories Permanent

In the brain, repeated activation makes synapses permanent. We implement LTP so frequently used knowledge resists decay.

2025-12-08|7 min
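One way to model LTP is as a permanent floor under decay: each access raises the floor, so heavily used memories can no longer decay away. This is a sketch under assumed parameters, not shodh-memory's actual consolidation rule:

```rust
// Hypothetical LTP model: access count raises a permanent retention floor
// toward 1.0, and decayed strength can never drop below that floor.
fn effective_strength(base: f64, decayed_fraction: f64, access_count: u32) -> f64 {
    let ltp_floor = 1.0 - 0.9_f64.powi(access_count as i32);
    (base * decayed_fraction).max(base * ltp_floor)
}

fn main() {
    // Both memories have decayed to 10% of their original strength...
    let rare = effective_strength(1.0, 0.1, 1);
    let frequent = effective_strength(1.0, 0.1, 30);
    // ...but the frequently accessed one is potentiated and barely loses anything.
    assert!(rare < 0.2);
    assert!(frequent > 0.9);
}
```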

Architecture & Engineering

System design decisions, storage engines, and the infrastructure of memory

Why Robotics Still Doesn't Have Memory (And How to Fix It)

Robots can see, grasp, and navigate. But they can't remember what they learned yesterday. The robotics memory gap is real — and solving it requires rethinking how memory works.

2026-02-13|9 min

MCP: The Protocol That Will Define How AI Tools Communicate

Model Context Protocol is to AI tools what HTTP was to the web. A deep dive into the protocol, its design, and why memory is its killer app.

2026-02-08|8 min

Why We Chose Rust for AI Infrastructure (And When You Shouldn't)

An honest take on Rust for AI systems. The wins: memory safety, zero-cost abstractions, cross-compilation to ARM. The costs: iteration speed, ecosystem gaps.

2026-02-05|8 min

RocksDB for AI Workloads: Lessons from Building a Memory Engine

Why we chose RocksDB over SQLite and Postgres. Column families, prefix iterators, write-ahead logging, and the compaction strategies that actually matter.

2026-01-22|9 min

Running Embedding Models on Edge Devices: ONNX, Quantization, and Reality

Getting MiniLM-L6-v2 to run on a Raspberry Pi at 34ms per embedding. ONNX Runtime, model quantization, batch processing, and the circuit breaker that saved us.

2026-01-15|10 min

RAG Is Not Memory: Why Your AI Still Has Amnesia

Everyone thinks RAG solves the memory problem. It doesn't. Retrieval is not remembering. Here's the difference—and why it matters.

2026-01-10|8 min

Memory Architecture for Autonomous Agents: Why Your AI Needs a Brain, Not a Database

Autonomous agents are everywhere—coding assistants, research bots, robotic systems. But most are goldfish. Here's how to give your agent a real brain.

2026-01-07|10 min

Why Vector Search Alone Isn't Enough for Agent Memory

Vector similarity is great, but agents need more. We explain why shodh-memory combines vectors with knowledge graphs and temporal indices.

2025-12-25|7 min

Benchmarking AI Memory Systems: Latency, Accuracy, and Scale

How does shodh-memory compare to alternatives? We share our benchmarking methodology and results across key metrics.

2025-12-15|8 min

AI Agents & MCP

Building agents that remember, learn, and coordinate through shared cognitive systems

Edge & Robotics

Running cognitive memory on Raspberry Pis, robots, and air-gapped systems

31 posts on cognition, memory architecture, and AI systems