memU
by NevaMind-AI
System Card
Organization: NevaMind-AI
Released: 2025-01
Architecture: hierarchical-summary / filesystem-style (Resources / Items / Categories)
Details: Hierarchical memory for 24/7 proactive agents. Continuously captures user intent in the background, auto-extracts facts, and organizes them into a three-layer file-system metaphor with pgvector and in-memory backends.
Parameters: —
Domain: agent-memory, personalization, lifelong-learning
Open Source: Yes
Website: memu.pro
Code: github.com/NevaMind-AI/memU
Tags: proactive-agent, hierarchical, always-on, pgvector, context-cache
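The three-layer file-system metaphor described in the Details field (raw Resources, extracted fact Items, folder-like Categories) can be sketched in a few lines. This is a minimal illustrative sketch with an in-memory backend only; all class and method names here are assumptions for illustration, not memU's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Resources / Items / Categories hierarchy.
# Names and structure are illustrative assumptions, not memU's real API.

@dataclass
class Resource:
    """Layer 1: raw captured input (e.g. a conversation transcript)."""
    resource_id: str
    content: str

@dataclass
class Item:
    """Layer 2: a fact auto-extracted from a resource."""
    item_id: str
    fact: str
    source_resource_id: str

@dataclass
class Category:
    """Layer 3: a folder-like grouping of related items."""
    name: str
    items: list = field(default_factory=list)

class InMemoryStore:
    """Toy in-memory backend: categories keyed by path-like names."""
    def __init__(self):
        self.categories = {}

    def add_item(self, category_name: str, item: Item) -> None:
        # Create the category on first use, like mkdir -p in a filesystem.
        cat = self.categories.setdefault(category_name, Category(category_name))
        cat.items.append(item)

    def recall(self, category_name: str) -> list:
        cat = self.categories.get(category_name)
        return [i.fact for i in cat.items] if cat else []

store = InMemoryStore()
res = Resource("r1", "User: I moved to Berlin last month.")
store.add_item("profile/location",
               Item("i1", "User lives in Berlin", res.resource_id))
print(store.recall("profile/location"))  # → ['User lives in Berlin']
```

A production backend would replace the dict with pgvector-backed similarity search over item embeddings; the hierarchy itself is just metadata layered on top of that retrieval.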
Capability Profile
Benchmark Scores
6 of 14 benchmarks reported.

- Long-Context Retrieval: 0/5
  - RULER: no data
  - NIAH: no data
  - LooGLE: no data
  - LongBench: no data
  - ∞Bench: no data
- Multi-Turn Recall: 2/2
- Cross-Session Memory: 1/1
- Multi-Hop QA: 1/3
- Agent Task Memory: 1/1
- Personalization: 1/1
- Factuality / Grounding: 0/1
  - RAGAS: no data

Sources:
- memu.pro/benchmark + github.com/NevaMind-AI/memU README (self-reported hybrid retrieval: semantic + keyword + contextual)
- Third-party launch coverage (X/Twitter): LongMemEval-S; weaker sourcing, not on the official page
- memU (NevaMind-AI/memU); evaluated on MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)
- memU (NevaMind-AI/memU); evaluated on PerLTQA: A Personal Long-Term Memory Question Answering Dataset (PolyU, 2402)
- memU (NevaMind-AI/memU); evaluated on AgentBench Memory Track (Tsinghua KEG, 2308)
- memU (NevaMind-AI/memU); evaluated on BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)