How shodh-memory compares

Choosing a memory system for your AI agents is a critical architectural decision. Here is an honest, feature-by-feature comparison to help you make an informed choice.

comparison-matrix.tsv
Feature                                         shodh         mem0          Zep           Cognee        Letta
Architecture                                    Rust native   Python        Go            Python        Python
Local-first (no cloud required)                 ✓             ~             ~             ✗             ~
Single binary, zero dependencies                ✓             ✗             ✗             ✗             ✗
Hebbian learning (memory strengthens with use)  ✓             ✗             ✗             ✗             ✗
Knowledge graph built-in                        ✓             ✓             ✓             ✓             ✗
3-tier memory (working→session→long-term)       ✓             ✗             ✗             ✗             ✗
Memory decay (forgetting curves)                ✓             ✗             ✗             ✗             ✗
MCP server included                             ✓             ✓             ✗             ✓             ✗
Edge/embedded device support                    ✓             ✗             ✗             ✗             ✗
Sub-millisecond entity lookup                   ✓             ✗             ✗             ✗             ✗
Offline operation                               ✓             ~             ✗             ✗             ✗
Open source                                     ✓ Apache 2.0  ✓ Apache 2.0  ✓ MIT         ✓ Apache 2.0  ✓ Apache 2.0
Self-hosted embeddings (no API keys)            ✓             ✗             ✗             ✗             ✗
GTD todo system                                 ✓             ✗             ✗             ✗             ✗
Causal lineage tracking                         ✓             ✗             ✗             ✗             ✗

✓ = fully supported   ~ = partial / limited   ✗ = not supported

Detailed Comparisons

shodh-vs-mem0.md

shodh-memory vs mem0

mem0 is a popular Python-based memory layer for AI applications. It provides a straightforward API for storing and retrieving memories and recently added graph-based memory features. However, mem0 relies on external services for embeddings (OpenAI, Cohere) and vector storage (Qdrant, Pinecone), meaning your data leaves your infrastructure and you accumulate API costs with every operation.

shodh-memory takes a fundamentally different approach. Written in Rust with zero external dependencies, it bundles its own embedding model (MiniLM-L6-v2 via ONNX Runtime), its own vector index (Vamana/SPANN), and its own knowledge graph with Hebbian learning. There are no API keys to manage, no cloud services to configure, and no data leaving your machine. A single binary handles everything.

Beyond architecture, shodh-memory implements cognitive features that mem0 does not offer: biologically inspired memory decay based on Wixted's forgetting curves, three-tier memory promotion (working → session → long-term), causal lineage tracking across memories, and a built-in GTD task system. These features make shodh-memory behave less like a database and more like an actual memory system.
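As a rough illustration, a power-law forgetting curve in the spirit of Wixted's work fits in a few lines of Rust. The decay constant and the tier thresholds below are illustrative assumptions for the sketch, not shodh-memory's actual parameters:

```rust
/// Power-law retention in the spirit of Wixted's forgetting curves:
/// recall strength falls off as (1 + t)^(-decay_rate).
/// The constants here are illustrative, not shodh-memory's internals.
fn retention(initial_strength: f64, hours_elapsed: f64, decay_rate: f64) -> f64 {
    initial_strength * (1.0 + hours_elapsed).powf(-decay_rate)
}

/// Hypothetical tier rule for the sketch: strong memories sit in the
/// long-term tier, moderately strong ones in the session tier, and
/// heavily decayed ones fall back to working memory.
fn tier_for(strength: f64) -> &'static str {
    match strength {
        s if s >= 0.6 => "long-term",
        s if s >= 0.2 => "session",
        _ => "working",
    }
}

fn main() {
    let fresh = retention(1.0, 0.0, 0.5); // just stored
    let day = retention(1.0, 24.0, 0.5);  // a day later
    println!("fresh = {fresh:.2}, after 24h = {day:.2}");
    println!("tier after a day: {}", tier_for(day));
}
```

The key property a power law gives you over exponential decay is a long tail: memories lose strength quickly at first, then stabilize, which matches the empirical shape of human forgetting.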

shodh-vs-zep.md

shodh-memory vs Zep

Zep is a cloud-first memory service written in Go with a managed platform. It offers knowledge graphs, temporal awareness, and a polished developer experience. However, Zep requires a cloud account, sends data to external servers, and operates as a SaaS dependency in your stack. Self-hosting is possible but requires significant infrastructure (PostgreSQL, Redis, embedding services).

shodh-memory is the opposite: local-first, single binary, zero infrastructure. Where Zep needs a cloud connection or a multi-service deployment, shodh-memory runs on a Raspberry Pi with no network at all. Both offer knowledge graphs, but shodh-memory's graph uses Hebbian learning with synaptic plasticity—edges strengthen when accessed together and decay when unused, mimicking how biological memory works.
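The Hebbian rule described here reduces to a simple weight update: an edge between co-accessed nodes moves toward full strength, while every other edge decays. The learning rate, decay factor, and node names below are made up for the sketch and are not shodh-memory's internals:

```rust
use std::collections::HashMap;

/// A minimal Hebbian edge store: co-accessed pairs strengthen,
/// all other edges decay. Parameters are illustrative assumptions.
struct HebbianGraph {
    weights: HashMap<(String, String), f64>,
    learning_rate: f64,
    decay: f64,
}

impl HebbianGraph {
    fn new() -> Self {
        Self { weights: HashMap::new(), learning_rate: 0.3, decay: 0.95 }
    }

    /// Normalize the pair so (a, b) and (b, a) map to the same edge.
    fn key(a: &str, b: &str) -> (String, String) {
        if a < b { (a.to_string(), b.to_string()) } else { (b.to_string(), a.to_string()) }
    }

    /// "Fire together, wire together": move the co-accessed edge
    /// toward 1.0 and let every other edge decay slightly.
    fn co_access(&mut self, a: &str, b: &str) {
        let key = Self::key(a, b);
        for (k, w) in self.weights.iter_mut() {
            if *k != key {
                *w *= self.decay;
            }
        }
        let w = self.weights.entry(key).or_insert(0.0);
        *w += self.learning_rate * (1.0 - *w);
    }

    fn weight(&self, a: &str, b: &str) -> f64 {
        *self.weights.get(&Self::key(a, b)).unwrap_or(&0.0)
    }
}

fn main() {
    let mut g = HebbianGraph::new();
    g.co_access("rust", "onnx");
    g.co_access("rust", "onnx");
    g.co_access("rust", "vamana"); // the rust–onnx edge now decays once
    println!("rust–onnx:   {:.4}", g.weight("rust", "onnx"));
    println!("rust–vamana: {:.4}", g.weight("rust", "vamana"));
}
```

Because the update `w += lr * (1 - w)` is asymptotic, repeated co-access saturates toward 1.0 instead of growing without bound, which is the usual way synaptic plasticity is kept stable.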

For teams that need air-gapped deployments, edge devices, or simply want to avoid cloud vendor lock-in, shodh-memory provides capabilities that Zep's architecture cannot support. The trade-off is that Zep offers a managed service with less operational overhead for cloud-native teams, while shodh-memory gives you complete sovereignty over your data and infrastructure.

shodh-vs-cognee.md

shodh-memory vs Cognee

Cognee focuses on building knowledge graphs from unstructured data using LLM-powered extraction pipelines. It excels at turning documents into structured knowledge and offers good integration with graph databases like Neo4j. However, Cognee depends on external LLM APIs for entity extraction and relationship inference, making it expensive at scale and impossible to run offline.

shodh-memory builds its knowledge graph using local NER (Named Entity Recognition) with rule-based extraction, requiring no external API calls. While this means less sophisticated extraction than Cognee's LLM-powered approach, it enables true offline operation and deterministic performance. shodh-memory's graph also implements spreading activation and Hebbian learning—features designed for runtime memory retrieval rather than static knowledge storage.
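Spreading activation itself is straightforward to sketch: each active node passes a damped share of its activation along weighted edges to its neighbours, so retrieval of one entity surfaces related ones. The graph, node names, and damping factor below are invented for illustration:

```rust
use std::collections::HashMap;

/// One round of spreading activation over a weighted adjacency list:
/// every active node contributes activation * edge_weight * damping
/// to each neighbour. Values here are illustrative, not shodh-memory's.
fn spread(
    edges: &HashMap<&str, Vec<(&str, f64)>>,
    activation: &HashMap<&str, f64>,
    damping: f64,
) -> HashMap<String, f64> {
    let mut next: HashMap<String, f64> = activation
        .iter()
        .map(|(k, v)| (k.to_string(), *v))
        .collect();
    for (node, act) in activation {
        if let Some(neighbours) = edges.get(node) {
            for (nb, weight) in neighbours {
                *next.entry(nb.to_string()).or_insert(0.0) += act * weight * damping;
            }
        }
    }
    next
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("rust", vec![("onnx", 0.8), ("vamana", 0.5)]);
    let mut activation = HashMap::new();
    activation.insert("rust", 1.0);
    let result = spread(&edges, &activation, 0.5);
    println!("{result:?}"); // neighbours receive partial activation
}
```

Iterating this step a few times with damping below 1.0 lets activation fade with graph distance, so closely linked entities rank high while remote ones stay near zero.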

The key distinction is purpose: Cognee is a knowledge engineering tool that builds graphs from documents, while shodh-memory is a cognitive memory system that builds graphs from lived experience. If you need to process a corpus of documents into a knowledge base, Cognee is strong. If you need an AI agent that remembers, learns, and forgets like a brain, shodh-memory is built for that.

shodh-vs-letta.md

shodh-memory vs Letta

Letta (formerly MemGPT) pioneered the concept of virtual context management—using an LLM to decide what to page in and out of its own context window. It is an agent framework first and a memory system second, providing tool-use, multi-agent orchestration, and memory as part of a broader platform. Its memory relies on external vector databases and LLM calls for management decisions.

shodh-memory is a dedicated memory system, not an agent framework. It does one thing deeply: provide biologically inspired persistent memory with sub-millisecond retrieval. Where Letta uses LLM calls to manage memory (expensive, slow, non-deterministic), shodh-memory uses algorithmic approaches—decay curves, Hebbian learning, spreading activation—that are fast, predictable, and free.

The two can be complementary: Letta as the agent orchestration layer, shodh-memory as the memory backend. But for teams that already have an agent framework and just need memory, shodh-memory provides a more focused, performant, and cost-effective solution without the overhead of an entire agent platform.

Ready to give your AI agents real memory?