R3Mem
by HKUST (2025)
System Card
Organization: HKUST (2025)
Released: 2025-02
Architecture: KV-cache extension / reversible hierarchical context compression
Details: Virtual memory tokens compress text hierarchically at the document, paragraph, and entity levels. A reversible architecture (adapter-tuned on a frozen Transformer) reconstructs the raw input by inverting the compression at retrieval time.
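A minimal sketch of the two directions the card describes: compressing text into virtual memory tokens at several granularities, and inverting that compression to recover the raw input. All names and the data layout here are hypothetical illustrations, not the paper's actual (neural) implementation — the real model learns these tokens rather than storing raw spans.

```python
from dataclasses import dataclass

# Hypothetical structure: one virtual memory token per granularity level.
# The real R3Mem tokens are learned embeddings; raw spans are kept here
# only so the toy compression stays exactly invertible.
@dataclass
class VirtualToken:
    level: str            # "document" or "paragraph" (entity level omitted)
    span: tuple           # (start, end) character offsets into the raw text
    payload: str          # stand-in for the compressed representation

def compress(text: str) -> list[VirtualToken]:
    """Hierarchical compression: one document-level token, one per paragraph."""
    tokens = [VirtualToken("document", (0, len(text)), text[:32])]
    offset = 0
    for para in text.split("\n\n"):
        start = text.index(para, offset)
        tokens.append(VirtualToken("paragraph", (start, start + len(para)), para))
        offset = start + len(para)
    return tokens

def reconstruct(tokens: list[VirtualToken]) -> str:
    """Invert the compression: stitch paragraph payloads back in span order."""
    paras = [t for t in tokens if t.level == "paragraph"]
    return "\n\n".join(t.payload for t in sorted(paras, key=lambda t: t.span[0]))
```

The point of the sketch is the round-trip contract — `reconstruct(compress(text)) == text` — which is what "reversible" buys for retrieval: compressed memory can be expanded back to the exact source rather than a lossy summary.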
Parameters: —
Domain: long-context, agent-memory
Open Source: No
Paper: arXiv:2502.15957
Tags: acl-2025, reversible, virtual-tokens, hierarchical
Capability Profile
Benchmark Scores
6 of 14 benchmarks

Multi-Turn Recall: 1/2 (MemoryBank: no data)
Cross-Session Memory: 0/1 (LongMemEval: no data)
Agent Task Memory: 1/1
Personalization: 0/1 (PerLTQA: no data)
Factuality / Grounding: 0/1 (RAGAS: no data)

Sources: R3Mem paper (arXiv:2502.15957); evaluated on:
- AgentBench Memory Track (Tsinghua KEG, 2308)
- BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
- HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens (Tsinghua / OpenBMB, 2402)
- LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)