2026-01-03 · 8 min read

Hebbian Learning for AI Agents: Neurons That Fire Together Wire Together

neuroscience · learning · architecture

Hebbian Learning for AI Agents

Donald Hebb's 1949 insight changed neuroscience forever: "Neurons that fire together wire together." When two neurons repeatedly activate in sequence, their synaptic connection strengthens. This simple principle underlies how biological brains learn.

The Problem with Static Memory

Most AI memory systems treat all memories equally. Store a fact, retrieve it later, done. But this misses a crucial insight: not all memories are equally important.

Consider an AI coding assistant. It might remember:

- The user prefers Rust over Python (accessed 50 times)
- A one-off debugging session from 3 months ago (accessed once)

Static systems give these equal weight. Hebbian learning doesn't.

How We Implement Hebbian Learning

In shodh-memory, every knowledge graph edge has a `strength` value (0.0 to 1.0). When two memories are accessed together:

```rust
fn strengthen_connection(edge: &mut Edge, activation: f32) {
    let delta = LEARNING_RATE * activation * (1.0 - edge.strength);
    edge.strength = (edge.strength + delta).min(1.0);
}
```

The key insight: strengthening is proportional to remaining capacity. Weak connections strengthen quickly; strong connections plateau. This matches biological long-term potentiation (LTP).
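
To make the plateau concrete, here is a minimal, self-contained sketch. The pared-down `Edge` struct and the `LEARNING_RATE` of 0.1 are illustrative assumptions, not shodh-memory's actual definitions:

```rust
const LEARNING_RATE: f32 = 0.1; // assumed value, for illustration only

struct Edge {
    strength: f32, // 0.0..=1.0, as in the article
}

fn strengthen_connection(edge: &mut Edge, activation: f32) {
    let delta = LEARNING_RATE * activation * (1.0 - edge.strength);
    edge.strength = (edge.strength + delta).min(1.0);
}

fn main() {
    // A weak connection gains a lot from one co-activation...
    let mut weak = Edge { strength: 0.1 };
    strengthen_connection(&mut weak, 1.0);
    println!("weak:   0.100 -> {:.3}", weak.strength); // 0.190

    // ...while a near-saturated connection barely moves.
    let mut strong = Edge { strength: 0.9 };
    strengthen_connection(&mut strong, 1.0);
    println!("strong: 0.900 -> {:.3}", strong.strength); // 0.910
}
```

Repeated co-activations push the weak edge up quickly at first, with ever-smaller gains as it approaches 1.0.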

Decay Without Use

Hebbian learning has a corollary: connections that don't fire weaken. We implement this with hybrid decay:

```rust
fn apply_decay(edge: &mut Edge, hours_since_access: f32) {
    // Exponential decay dominates for recent memories
    let exp_decay = (-hours_since_access / HALF_LIFE).exp();
    // Power-law decay for older memories (slower, long tail)
    let power_decay = (1.0 + hours_since_access).powf(-0.5);
    // Blend the two curves (fixed 70/30 weighting toward the exponential term)
    edge.strength *= exp_decay * 0.7 + power_decay * 0.3;
}
```

This matches Ebbinghaus's forgetting curve research: rapid initial decay, then a long tail.
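
For a feel of the blend, here is a small standalone sketch that isolates the decay factor as a pure function of age; the 24-hour `HALF_LIFE` is an assumed constant, not the value shodh-memory ships with:

```rust
const HALF_LIFE: f32 = 24.0; // assumed decay constant in hours, for illustration

// The same blend as apply_decay above, isolated as a pure function of age.
fn hybrid_decay_factor(hours_since_access: f32) -> f32 {
    let exp_decay = (-hours_since_access / HALF_LIFE).exp();
    let power_decay = (1.0 + hours_since_access).powf(-0.5);
    exp_decay * 0.7 + power_decay * 0.3
}

fn main() {
    // Rapid early drop from the exponential term, then a long power-law tail.
    for hours in [1.0_f32, 24.0, 168.0, 720.0] {
        println!("{:>4.0}h since access -> retain {:.2}", hours, hybrid_decay_factor(hours));
    }
}
```

The exponential term does most of the work in the first day or so; after that, the power-law term supplies the long tail, so strength shrinks gradually rather than collapsing to zero.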

Long-Term Potentiation (LTP)

In biology, some synapses become permanent through repeated activation. We implement LTP by marking edges as `permanent` once they cross a threshold:

```rust
if edge.strength > LTP_THRESHOLD && edge.access_count > LTP_MIN_ACCESSES {
    edge.permanent = true; // Immune to decay
}
```

This means core knowledge (the user prefers Rust) becomes permanent, while ephemeral context fades naturally.
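
Putting the pieces together, one plausible way to gate decay on the `permanent` flag looks like the sketch below. The threshold values are assumptions, and `apply_decay` is repeated so the example runs standalone; the real composition in shodh-memory may differ:

```rust
// Assumed thresholds and a pared-down Edge, for illustration only.
const LTP_THRESHOLD: f32 = 0.8;
const LTP_MIN_ACCESSES: u32 = 10;
const HALF_LIFE: f32 = 24.0;

struct Edge {
    strength: f32,
    access_count: u32,
    permanent: bool,
}

fn apply_decay(edge: &mut Edge, hours_since_access: f32) {
    let exp_decay = (-hours_since_access / HALF_LIFE).exp();
    let power_decay = (1.0 + hours_since_access).powf(-0.5);
    edge.strength *= exp_decay * 0.7 + power_decay * 0.3;
}

fn maintain_edge(edge: &mut Edge, hours_since_access: f32) {
    // Promote the edge once it crosses the LTP thresholds...
    if edge.strength > LTP_THRESHOLD && edge.access_count > LTP_MIN_ACCESSES {
        edge.permanent = true;
    }
    // ...and only non-permanent edges decay.
    if !edge.permanent {
        apply_decay(edge, hours_since_access);
    }
}

fn main() {
    let mut core = Edge { strength: 0.85, access_count: 50, permanent: false };
    let mut ephemeral = Edge { strength: 0.85, access_count: 2, permanent: false };

    // A week (168 hours) of disuse for both edges.
    maintain_edge(&mut core, 168.0);
    maintain_edge(&mut ephemeral, 168.0);
    println!("core:      {:.2} (permanent: {})", core.strength, core.permanent);
    println!("ephemeral: {:.2} (permanent: {})", ephemeral.strength, ephemeral.permanent);
}
```

After a week of disuse, the heavily accessed edge has been promoted and keeps its strength, while the one-off edge decays to near zero.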

Results

In our testing, Hebbian learning improved relevant context retrieval by 34% compared to static memory. More importantly, it reduced noise—old, irrelevant memories naturally fade instead of cluttering results.

Memory should work like memory. Shodh-memory makes it so.