2025-12-12 · 10 min read

Memory for Robots: Learning from the Real World

robotics · edge · case-study

Theory is nice. Here's how shodh-memory works in an actual warehouse robot.

The Setup

Our test subject is a pick-and-place robot in a small fulfillment center:

- 6-axis arm
- Mobile base
- RGB-D camera
- Running on an NVIDIA Jetson Orin

The Problem

Warehouses change. Products move. New SKUs arrive. Lighting shifts. A robot trained once degrades over time.

Traditional approach: retrain periodically. Expensive. Downtime.

Our approach: continuous learning with shodh-memory.

What the Robot Remembers

Object Locations

```rust
// After each successful pick, record where the item was found.
// (Call shape is illustrative: tags passed as a slice.)
memory.remember(
    format!("SKU {} found at bin {} position {:?}", sku, bin, pos),
    &["location", sku, bin],
);
```

Over time, the robot builds a probabilistic map of where items are likely to be.
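
To make that concrete, here's a minimal sketch of what the aggregation could look like. The `location_distribution` helper and its input (bin names already extracted from recalled memories) are hypothetical, not part of the shodh-memory API:

```rust
use std::collections::HashMap;

// Hypothetical helper: fold recalled "SKU found at bin X" observations
// into a probability distribution over bins for one SKU.
fn location_distribution(recalled_bins: &[String]) -> Vec<(String, f64)> {
    if recalled_bins.is_empty() {
        return Vec::new();
    }
    let mut counts: HashMap<String, usize> = HashMap::new();
    for bin in recalled_bins {
        *counts.entry(bin.clone()).or_insert(0) += 1;
    }
    let total = recalled_bins.len() as f64;
    let mut dist: Vec<(String, f64)> = counts
        .into_iter()
        .map(|(bin, n)| (bin, n as f64 / total))
        .collect();
    // Most frequently observed bin first.
    dist.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    dist
}
```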

Grasp Strategies

```rust
// After each grasp attempt, record the outcome either way.
if success {
    memory.remember(
        format!("Grasp {} worked for {} at angle {}", strategy, sku, angle),
        &["grasp", "success", sku],
    );
} else {
    memory.remember(
        format!("Grasp {} failed for {}: {}", strategy, sku, reason),
        &["grasp", "failure", sku],
    );
}
```

The robot learns which grasp strategies work for which product types.
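
One way to use those outcomes at planning time: score each strategy by a Laplace-smoothed success rate, so a strategy with two lucky attempts doesn't beat one with fifty solid ones. This helper is a hypothetical sketch, not shodh-memory API:

```rust
use std::collections::HashMap;

// Hypothetical helper: pick the grasp strategy with the best
// smoothed success rate, (successes + 1) / (attempts + 2).
fn best_strategy(outcomes: &[(String, bool)]) -> Option<String> {
    let mut tally: HashMap<String, (f64, f64)> = HashMap::new(); // (successes, attempts)
    for (strategy, succeeded) in outcomes {
        let entry = tally.entry(strategy.clone()).or_insert((0.0, 0.0));
        entry.1 += 1.0;
        if *succeeded {
            entry.0 += 1.0;
        }
    }
    tally
        .into_iter()
        .max_by(|(_, (s1, n1)), (_, (s2, n2))| {
            let a = (s1 + 1.0) / (n1 + 2.0);
            let b = (s2 + 1.0) / (n2 + 2.0);
            a.partial_cmp(&b).unwrap()
        })
        .map(|(strategy, _)| strategy)
}
```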

Environmental Conditions

```rust
// Periodic observations of the operating environment.
memory.remember(
    format!("Lighting in zone A: {} lux, shadows from {:?}", lux, shadow_dir),
    &["environment", "lighting", "zone-a"],
);
```

Lighting affects vision. The robot remembers when shadows are problematic.
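
As a rough illustration, the vision pipeline can consult the latest lighting memory before committing to a pick. Everything below is an assumption for the sketch: `parse_lux`, the 200-lux threshold, and the idea that memory text is parsed client-side:

```rust
// Illustrative parser: pull "... 340 lux ..." out of a memory string.
fn parse_lux(text: &str) -> Option<f64> {
    text.split_whitespace()
        .collect::<Vec<_>>()
        .windows(2)
        .find(|w| w[1].trim_end_matches(',') == "lux")
        .and_then(|w| w[0].parse::<f64>().ok())
}

// Hypothetical check: was the most recent lighting reading in zone A
// too dim for reliable depth estimation?
fn zone_a_too_dim(recent_lighting_memories: &[String]) -> bool {
    recent_lighting_memories
        .first()
        .and_then(|text| parse_lux(text))
        .map(|lux| lux < 200.0)
        .unwrap_or(false)
}
```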

Retrieval in Action

When given a pick task:

```rust
fn plan_pick(sku: &str) -> PickPlan {
    // Where is this item likely to be? (top 5 location memories)
    let locations = memory.recall(format!("Where is {} located?", sku), 5);

    // What grasp strategy works? (top 3 successful grasps)
    let grasps = memory.recall(format!("Successful grasp for {}", sku), 3);

    // Current environmental conditions
    let env = memory.proactive_context("current lighting and obstacles");

    combine_into_plan(locations, grasps, env)
}
```

Results

After 2 weeks of operation:

| Metric | Day 1 | Day 14 |
|--------|-------|--------|
| Pick success rate | 87% | 94% |
| Avg pick time | 4.2s | 3.1s |
| Failed grasp retries | 23% | 8% |
| Path planning time | 120ms | 85ms |

The robot got better at its job without retraining.

Memory Growth

After 2 weeks:

- 47,000 memories stored
- 180,000 graph edges
- 12 GB storage used

Decay keeps memory bounded. Old, unused memories fade. Current knowledge stays fresh.
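
The decay curve itself is simple to reason about. A minimal sketch, assuming exponential decay with refresh-on-access; the half-life and pruning threshold here are illustrative, not shodh-memory's actual constants:

```rust
// Exponential decay: a memory's retention halves every `half_life_secs`
// seconds since it was last accessed. Accessing a memory resets the clock.
fn retention_score(secs_since_last_access: f64, half_life_secs: f64) -> f64 {
    0.5_f64.powf(secs_since_last_access / half_life_secs)
}

fn should_evict(secs_since_last_access: f64) -> bool {
    // Illustrative policy: two-week half-life, prune below 5% retention.
    const HALF_LIFE: f64 = 14.0 * 86_400.0;
    retention_score(secs_since_last_access, HALF_LIFE) < 0.05
}
```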

Lessons Learned

1. **Tag everything**: Structured tags enable fast filtering (see the sketch after this list)

2. **Remember failures**: Negative examples are as valuable as positive ones

3. **Temporal context matters**: Recent observations outweigh old ones

4. **Edge deployment works**: Sub-100ms latency on Jetson is achievable
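
On point 1, the filtering itself is cheap even done client-side. A hypothetical sketch; `MemoryEntry` is a stand-in type, not shodh-memory's:

```rust
// Stand-in for whatever a recalled memory looks like.
struct MemoryEntry {
    content: String,
    tags: Vec<String>,
}

// Keep only entries carrying all of the requested tags.
fn filter_by_tags<'a>(entries: &'a [MemoryEntry], required: &[&str]) -> Vec<&'a MemoryEntry> {
    entries
        .iter()
        .filter(|e| required.iter().all(|t| e.tags.iter().any(|et| et == t)))
        .collect()
}
```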