Selfmem
by Tsinghua / Microsoft (Cheng et al.)
System Card
Organization: Tsinghua / Microsoft (Cheng et al.)
Released: 2023-05
Architecture: vector-rag / iterative self-retrieval (the generator feeds its own memory)
Details: A retrieval-augmented generator iteratively produces outputs that a memory selector then chooses as retrieval targets for the next round, forming an unbounded self-memory pool rather than a fixed corpus.
Parameters: —
Domain: rag-retrieval
Open Source: Yes
Paper: arXiv:2305.02437
Code: Repository
Tags: neurips-2023, self-retrieval, translation, summarization
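The generate-then-select loop described in the Details line can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate`, `select`, and `score` are hypothetical stand-ins for the trained retrieval-augmented generator and memory selector from the paper.

```python
def generate(source, memory, n_candidates=3):
    # Stand-in for the retrieval-augmented generator: emit several
    # candidate outputs conditioned on the source and current memory.
    return [f"{source}|mem={memory}|cand={i}" for i in range(n_candidates)]

def score(source, candidate):
    # Toy scoring heuristic for illustration only; Selfmem trains a
    # dedicated memory selector to rank candidates.
    return len(candidate)

def select(source, candidates):
    # Stand-in for the memory selector: promote the best-scoring
    # candidate to serve as the memory for the next round.
    return max(candidates, key=lambda c: score(source, c))

def selfmem(source, initial_memory, rounds=3):
    """Iterate: generate candidates with the current memory, then let
    the selector choose one candidate as the next round's memory.
    The memory pool is thus self-generated and unbounded, rather than
    drawn from a fixed retrieval corpus."""
    memory = initial_memory
    pool = []  # accumulates every generated candidate (the self-memory pool)
    for _ in range(rounds):
        candidates = generate(source, memory)
        pool.extend(candidates)
        memory = select(source, candidates)
    return memory, pool
```

Each iteration conditions generation on the previous round's selected output, so the pool grows with the number of rounds instead of staying fixed at retrieval time.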
Capability Profile
Benchmark Scores
Scored on 5 of 14 benchmarks.
- Multi-Turn Recall: 0/2 (LoCoMo: no data; MemoryBank: no data)
- Cross-Session Memory: 0/1 (LongMemEval: no data)
- Multi-Hop QA: 2/3
- Agent Task Memory: 0/1 (AgentBench-Mem: no data)
- Personalization: 0/1 (PerLTQA: no data)
- Factuality / Grounding: 1/1
Sources: Selfmem paper (arXiv:2305.02437), evaluated on:
- HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries (HKUST, 2401)
- RAGAS: Automated Evaluation of Retrieval-Augmented Generation (Exploding Gradients, 2309)
- RULER: What's the Real Context Size of Your Long-Context Language Models (NVIDIA, 2404)