Mem0
by Mem0
System Card
Organization: Mem0
Released: 2024-03
Architecture: Hybrid (vector index + LLM-extracted facts)
Details: Two-phase pipeline: an LLM distills each turn into structured fact memories, which are stored in a vector index alongside the raw episodes. Retrieval merges semantic similarity with rule-based filters and time decay.
Parameters: —
Domain: agent-memory, personalization, episodic-session
Open Source: Yes
Links: Paper, Website, Code
Tags: llm-extracted, vector, personalization, agent, open-source
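The two-phase pipeline described under Details can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Mem0's actual implementation: the LLM distillation step is replaced by a naive sentence splitter, the embedding model by a toy bag-of-letters vector, and the exponential decay constant is an arbitrary choice.

```python
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    embedding: list
    timestamp: float

def embed(text: str) -> list:
    """Toy bag-of-letters embedding (stand-in for a real embedding model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def extract_facts(turn: str) -> list:
    """Phase 1 stand-in for the LLM distillation step: one 'fact' per sentence."""
    return [s.strip() for s in turn.split(".") if s.strip()]

class MemoryStore:
    """Phase 2: vector index with similarity retrieval plus time decay."""

    def __init__(self, decay_rate: float = 0.01):
        self.memories = []
        self.decay_rate = decay_rate  # assumed exponential-decay constant

    def add_turn(self, turn: str, now: float) -> None:
        # Distill the turn into fact memories and index each one.
        for fact in extract_facts(turn):
            self.memories.append(Memory(fact, embed(fact), now))

    def retrieve(self, query: str, now: float, k: int = 3) -> list:
        # Score = cosine similarity (vectors are unit-normalized) x decay factor.
        q = embed(query)
        scored = []
        for m in self.memories:
            sim = sum(a * b for a, b in zip(q, m.embedding))
            decay = math.exp(-self.decay_rate * (now - m.timestamp))
            scored.append((sim * decay, m.text))
        scored.sort(reverse=True)
        return [text for _, text in scored[:k]]

# Demo: store one turn, then retrieve the most relevant fact later.
store = MemoryStore()
store.add_turn("User likes pizza. User lives in Berlin", now=0.0)
store.retrieve("favorite pizza topping", now=10.0, k=1)  # -> ["User likes pizza"]
```

The decay term down-weights stale memories at query time rather than deleting them, so old facts remain recoverable when nothing fresher matches.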
Capability Profile
Benchmark Scores
Benchmarks with data: 3 of 14

Long-Context Retrieval: 0/5
- RULER: no data
- NIAH: no data
- LooGLE: no data
- LongBench: no data
- ∞Bench: no data

Multi-Turn Recall: 2/2

Cross-Session Memory: 1/1

Multi-Hop QA: 0/3
- BABILong: no data
- MultiHop-RAG: no data
- HotpotQA: no data

Agent Task Memory: 0/1
- AgentBench-Mem: no data

Personalization: 0/1
- PerLTQA: no data

Factuality / Grounding: 0/1
- RAGAS: no data