
The Three-Tier Memory Architecture: From Cowan to Code

Human memory isn't a single system—it's layers of systems with different capacities, decay rates, and access patterns. Shodh-memory implements this insight.

Cowan's Embedded-Processes Model

Psychologist Nelson Cowan proposed that working memory isn't separate from long-term memory—it's an activated subset of it. His model has three components:

1. **Sensory Buffer**: Raw input, ~7 items, decays in seconds

2. **Focus of Attention**: Active processing, ~4 chunks, decays in minutes

3. **Activated Long-Term Memory**: Primed memories, unlimited, decays over hours/days

Traditional AI memory systems ignore this. They dump everything into a vector store and hope for the best.

Our Implementation

Tier 1: Sensory Buffer

```rust
pub struct SensoryBuffer {
    items: RingBuffer<RawInput, 7>, // Raw inputs, oldest evicted first
    ttl: Duration,                  // Set to Duration::from_secs(30)
}
```

Raw observations enter here first. Most are discarded. Only salient inputs (determined by novelty detection) graduate to working memory.
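
To make the gate concrete, here is a minimal sketch. Everything in it is an assumption for illustration: the `RawInput` fields, the `is_salient` method, and cosine similarity as the novelty measure are stand-ins, not shodh-memory's actual API.

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Hypothetical raw observation: an embedding plus an arrival time.
struct RawInput {
    embedding: Vec<f32>,
    arrived_at: Instant,
}

struct SensoryBuffer {
    items: VecDeque<RawInput>, // capped at 7 in `push`
    ttl: Duration,
}

impl SensoryBuffer {
    fn push(&mut self, input: RawInput) {
        if self.items.len() == 7 {
            self.items.pop_front(); // the oldest raw input is simply lost
        }
        self.items.push_back(input);
    }

    /// Novelty detection as dissimilarity: an input is salient only if
    /// nothing still alive in the buffer is too close to it.
    fn is_salient(&self, input: &RawInput, threshold: f32) -> bool {
        self.items
            .iter()
            .filter(|i| i.arrived_at.elapsed() < self.ttl) // skip expired items
            .all(|i| cosine_similarity(&i.embedding, &input.embedding) < threshold)
    }
}

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b).max(f32::EPSILON)
}
```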

Tier 2: Working Memory

```rust
pub struct WorkingMemory {
    focus: BoundedVec<Chunk, 4>,            // The ~4 chunks under active processing
    associations: HashMap<ChunkId, Vec<LtmId>>,
    decay_rate: f32,                        // Activation decay, on a timescale of minutes
}
```

Working memory maintains the current context. When you ask "What was I working on?", this is what answers. It holds ~4 chunks but each chunk can reference many long-term memories.
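
Here is how the focus might admit and surface chunks. The eviction rule (displace the least-activated chunk) and every name below are assumptions sketched for illustration, not the real API:

```rust
use std::collections::HashMap;

type ChunkId = u64;
type LtmId = u64;

struct Chunk {
    id: ChunkId,
    summary: String,
    activation: f32, // decays over minutes, refreshed on access
}

struct WorkingMemory {
    focus: Vec<Chunk>, // invariant: at most 4 chunks
    associations: HashMap<ChunkId, Vec<LtmId>>,
}

impl WorkingMemory {
    /// Admit a chunk, displacing the least-activated one if the focus is full.
    fn attend(&mut self, chunk: Chunk) {
        if self.focus.len() == 4 {
            if let Some(weakest) = self
                .focus
                .iter()
                .enumerate()
                .min_by(|(_, a), (_, b)| a.activation.total_cmp(&b.activation))
                .map(|(i, _)| i)
            {
                // Displaced, not destroyed: its contents persist in LTM.
                self.focus.swap_remove(weakest);
            }
        }
        self.focus.push(chunk);
    }

    /// "What was I working on?": the current focus, strongest first.
    fn current_context(&self) -> Vec<&str> {
        let mut chunks: Vec<&Chunk> = self.focus.iter().collect();
        chunks.sort_by(|a, b| b.activation.total_cmp(&a.activation));
        chunks.into_iter().map(|c| c.summary.as_str()).collect()
    }
}
```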

Tier 3: Long-Term Memory

```rust
pub struct LongTermMemory {
    episodic: VectorIndex,    // What happened
    semantic: KnowledgeGraph, // What it means
    decay: HybridDecay,       // Exponential + power-law
}
```

Long-term memory is where meaning lives. It combines episodic memories (events) with semantic knowledge (relationships).
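
The `decay` field above hints at a blended forgetting curve. Here is one plausible shape, purely a sketch with invented constants: exponential decay dominates while a memory is fresh, and a power-law tail keeps older, consolidated memories retrievable far longer.

```rust
struct HybridDecay {
    lambda: f32, // exponential rate: fast early forgetting
    alpha: f32,  // power-law exponent: slow long tail
    blend: f32,  // 0.0 = pure exponential, 1.0 = pure power-law
}

impl HybridDecay {
    /// Retention in [0, 1] after `hours` since the memory was last accessed.
    fn retention(&self, hours: f32) -> f32 {
        let exponential = (-self.lambda * hours).exp();
        let power_law = (1.0 + hours).powf(-self.alpha);
        (1.0 - self.blend) * exponential + self.blend * power_law
    }
}

fn main() {
    let decay = HybridDecay { lambda: 0.1, alpha: 0.5, blend: 0.5 };
    for hours in [1.0_f32, 24.0, 168.0] {
        println!("retention after {hours} h: {:.3}", decay.retention(hours));
    }
}
```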

Information Flow

```
Input → Sensory Buffer → (filter) → Working Memory → (consolidate) → LTM
                                          ↑                           │
                                          └──────── (retrieve) ───────┘
```

The key insight: retrieval from LTM into working memory is where Hebbian learning happens. Accessed memories strengthen; ignored memories decay.
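
A sketch of that loop, with invented names and constants: each retrieval bumps a trace's strength, and a periodic decay pass shrinks whatever was left untouched, pruning traces that fall below a floor.

```rust
use std::collections::HashMap;

type LtmId = u64;

struct MemoryTrace {
    content: String,
    strength: f32, // grows on access, shrinks between accesses
}

struct Ltm {
    traces: HashMap<LtmId, MemoryTrace>,
}

impl Ltm {
    /// Retrieval doubles as the write: each recall reinforces the trace.
    fn retrieve(&mut self, id: LtmId) -> Option<&MemoryTrace> {
        let trace = self.traces.get_mut(&id)?;
        trace.strength = (trace.strength + 0.1).min(1.0); // Hebbian boost, capped
        Some(trace)
    }

    /// Background pass: everything not retrieved fades, and traces that
    /// fall below a floor are pruned outright.
    fn decay_all(&mut self, factor: f32) {
        for trace in self.traces.values_mut() {
            trace.strength *= factor;
        }
        self.traces.retain(|_, t| t.strength > 0.01);
    }
}
```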

Why This Matters

Single-tier memory systems have no notion of relevance. Everything is equally accessible, which means everything competes for attention. Our tiered approach means:

- Recent context is always fast (working memory)
- Important patterns persist (LTM with strengthening)
- Noise fades naturally (decay)
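
In code, relevance falls out of lookup order: query the cheap tier first and fall through on a miss. A sketch with simplified stand-in types; a real implementation would match by embedding similarity rather than substring, but the fall-through shape is the point.

```rust
struct WorkingMemory {
    focus: Vec<String>, // current context: at most ~4 chunk summaries
}

struct LongTermMemory {
    entries: Vec<(String, f32)>, // (memory, strength)
}

/// Tiered recall: cheapest store first, falling through on a miss.
fn recall(query: &str, wm: &WorkingMemory, ltm: &LongTermMemory) -> Option<String> {
    // Working memory is tiny, so scanning it is effectively free.
    if let Some(hit) = wm.focus.iter().find(|c| c.contains(query)) {
        return Some(hit.clone());
    }
    // LTM hits are ranked by strength, so reinforced patterns surface
    // first and decayed noise sorts to the bottom.
    ltm.entries
        .iter()
        .filter(|(text, _)| text.contains(query))
        .max_by(|(_, a), (_, b)| a.total_cmp(b))
        .map(|(text, _)| text.clone())
}
```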

This matches human memory because it's modeled on human memory.