Memory for Robots: Learning from the Real World
Theory is nice. Here's how shodh-memory works in an actual warehouse robot.
The Setup
Our test subject is a pick-and-place robot in a small fulfillment center.
The Problem
Warehouses change. Products move. New SKUs arrive. Lighting shifts. A robot trained once degrades over time.
Traditional approach: retrain periodically. Expensive. Downtime.
Our approach: continuous learning with shodh-memory.
What the Robot Remembers
Object Locations
// After each successful pick
memory.remember(
    &format!("SKU {} found at bin {} position {:?}", sku, bin, pos),
    &["location", sku, bin],
);
Over time, the robot builds a probabilistic map of where items are likely to be.
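The post doesn't show how recalled sentences become that map, so here is a minimal sketch: rank candidate bins by how often recalled memories mention them. The `rank_bins` helper and its string parsing are my own illustration (assuming memories come back as the plain strings stored above), not shodh-memory API:

```rust
use std::collections::HashMap;

/// Sketch: rank candidate bins for a SKU by how often recalled
/// memories mention them. Assumes each memory is a plain string
/// containing "at bin <id>", matching the format stored above.
fn rank_bins(memories: &[&str]) -> Vec<(String, usize)> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for m in memories {
        if let Some(idx) = m.find("at bin ") {
            let rest = &m[idx + "at bin ".len()..];
            if let Some(bin) = rest.split_whitespace().next() {
                *counts.entry(bin.to_string()).or_insert(0) += 1;
            }
        }
    }
    let mut ranked: Vec<(String, usize)> = counts.into_iter().collect();
    ranked.sort_by(|a, b| b.1.cmp(&a.1)); // most frequently seen bin first
    ranked
}

fn main() {
    let memories = [
        "SKU A17 found at bin B-12 position (0.4, 1.1)",
        "SKU A17 found at bin B-12 position (0.5, 1.0)",
        "SKU A17 found at bin C-03 position (2.1, 0.2)",
    ];
    println!("{:?}", rank_bins(&memories)); // B-12 counted twice, C-03 once
}
```

Frequency counts are the crudest form of "probabilistic"; a real deployment would also weight by recency.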
Grasp Strategies
// After each grasp attempt
if success {
    memory.remember(
        &format!("Grasp {} worked for {} at angle {}", strategy, sku, angle),
        &["grasp", "success", sku],
    );
} else {
    memory.remember(
        &format!("Grasp {} failed for {}: {}", strategy, sku, reason),
        &["grasp", "failure", sku],
    );
}
The robot learns which grasp strategies work for which product types.
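One way to turn those success/failure memories into a decision, as a sketch (the `best_strategy` helper and its Laplace-smoothed scoring are assumptions for illustration, not part of shodh-memory):

```rust
use std::collections::HashMap;

/// Sketch: choose the grasp strategy with the best observed success
/// rate for a SKU. Outcomes are (strategy, succeeded) pairs distilled
/// from the success/failure memories above. Laplace smoothing
/// ((s + 1) / (n + 2)) keeps a single lucky attempt from dominating.
fn best_strategy(outcomes: &[(&str, bool)]) -> Option<String> {
    // strategy -> (successes, attempts)
    let mut stats: HashMap<&str, (u32, u32)> = HashMap::new();
    for &(strategy, ok) in outcomes {
        let e = stats.entry(strategy).or_insert((0, 0));
        e.1 += 1;
        if ok {
            e.0 += 1;
        }
    }
    stats
        .into_iter()
        .max_by(|&(_, (sa, na)), &(_, (sb, nb))| {
            let ra = (sa as f64 + 1.0) / (na as f64 + 2.0);
            let rb = (sb as f64 + 1.0) / (nb as f64 + 2.0);
            ra.partial_cmp(&rb).unwrap()
        })
        .map(|(s, _)| s.to_string())
}

fn main() {
    let outcomes = [
        ("top-down", true),
        ("top-down", true),
        ("side-pinch", false),
    ];
    println!("{:?}", best_strategy(&outcomes)); // top-down has the higher smoothed rate
}
```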
Environmental Conditions
// Periodic observations
memory.remember(
    &format!("Lighting in zone A: {} lux, shadows from {:?}", lux, shadow_dir),
    &["environment", "lighting", "zone-a"],
);
Lighting affects vision. The robot remembers when shadows are problematic.
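A small sketch of acting on those observations: pull the lux value back out of a remembered string and decide whether to fall back to a slower, more robust vision pipeline. The parsing and the 300 lux threshold are my assumptions, not part of the library:

```rust
/// Sketch: pull the lux value back out of a remembered observation
/// like "Lighting in zone A: 180 lux, shadows from East". The string
/// format matches what the snippet above stores; the parsing is mine.
fn parse_lux(observation: &str) -> Option<u32> {
    let before = observation.split(" lux").next()?;
    before.split_whitespace().last()?.parse().ok()
}

/// Hypothetical policy: switch to a slower, more robust vision
/// pipeline when remembered lighting is dim (the 300 lux threshold
/// is an assumption for illustration).
fn needs_robust_vision(observation: &str) -> bool {
    parse_lux(observation).map_or(true, |lux| lux < 300)
}

fn main() {
    let obs = "Lighting in zone A: 180 lux, shadows from East";
    println!("lux = {:?}, robust vision = {}", parse_lux(obs), needs_robust_vision(obs));
}
```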
Retrieval in Action
When given a pick task:
fn plan_pick(sku: &str) -> PickPlan {
    // Where is this item likely to be?
    let locations = memory.recall(&format!("Where is {} located?", sku), 5);

    // What grasp strategy works?
    let grasps = memory.recall(&format!("Successful grasp for {}", sku), 3);

    // Current environmental conditions
    let env = memory.proactive_context("current lighting and obstacles");

    combine_into_plan(locations, grasps, env)
}
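`combine_into_plan` is left undefined above. A minimal sketch of how it could merge the three recall results — the `PickPlan` fields, the `Option` return, and the shadow heuristic are all assumptions for illustration:

```rust
/// Assumed plan shape for illustration.
#[derive(Debug, PartialEq)]
struct PickPlan {
    target_bin: String,
    grasp_strategy: String,
    slow_approach: bool, // back off when lighting is known to be bad
}

/// Sketch: take the top-ranked recalled bin and grasp strategy, and
/// slow the approach when the environment context mentions shadows.
/// Returns None when either recall came back empty.
fn combine_into_plan(locations: &[String], grasps: &[String], env: &str) -> Option<PickPlan> {
    Some(PickPlan {
        target_bin: locations.first()?.clone(),
        grasp_strategy: grasps.first()?.clone(),
        slow_approach: env.contains("shadow"),
    })
}

fn main() {
    let plan = combine_into_plan(
        &["B-12".to_string()],
        &["top-down".to_string()],
        "dim light, shadows from the east",
    );
    println!("{:?}", plan);
}
```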
Results
After two weeks of operation, the robot got better at its job without retraining.
Memory Growth
After two weeks of continuous operation, decay keeps memory bounded: old, unused memories fade while current knowledge stays fresh.
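Decay can be pictured as a half-life on relevance. A minimal sketch — the formula and the half-life parameter are my illustration, not shodh-memory's documented scoring:

```rust
/// Sketch of recency weighting (an assumption about how decay could be
/// scored, not shodh-memory's internals): a memory's relevance halves
/// every `half_life_hours`, so stale observations gradually drop below
/// any retrieval cutoff while fresh ones stay near full weight.
fn decayed_score(base_score: f64, age_hours: f64, half_life_hours: f64) -> f64 {
    base_score * 0.5_f64.powf(age_hours / half_life_hours)
}

fn main() {
    // A day-old memory at a 24h half-life keeps half its weight;
    // a week-old one keeps less than 1%.
    println!("{:.3}", decayed_score(1.0, 24.0, 24.0));
    println!("{:.3}", decayed_score(1.0, 168.0, 24.0));
}
```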
Lessons Learned
1. **Tag everything**: Structured tags enable fast filtering
2. **Remember failures**: Negative examples are as valuable as positive ones
3. **Temporal context matters**: Recent observations outweigh old ones
4. **Edge deployment works**: Sub-100ms latency on Jetson is achievable