MemoChat
by University of Warwick / Alibaba
System Card
Organization: University of Warwick / Alibaba
Released: 2023-08
Architecture: hierarchical-summary / memo-based iterative memorization-retrieval-response
Details: Tunes LLMs to maintain self-composed memos in an iterative "memorize-retrieve-respond" cycle. Instructions are reconstructed from public datasets so the model learns to write and query structured memos for long-range consistency.
Parameters: —
Domain: episodic-session, agent-memory
Open Source: Yes
Paper: arXiv:2308.08239
Code: Repository
Tags: instruction-tuning, memos, fine-tuned, long-range
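The memo cycle described above can be sketched in a few lines. This is a minimal illustration of the data flow only, not the MemoChat codebase: the function names, the keyword-overlap retrieval, and the stubbed-out generation step are all assumptions made for clarity.

```python
# Minimal sketch of a memorize-retrieve-respond cycle.
# All names here are illustrative; MemoChat itself uses a tuned LLM
# to write, query, and answer from structured memos.

def memorize(memos, summary, topic):
    """Append a self-composed memo (topic line + summary) to the memo list."""
    memos.append({"topic": topic, "summary": summary})

def overlap(memo, query):
    """Count words shared between a memo's topic line and the query."""
    return len(set(memo["topic"].lower().split()) & set(query.lower().split()))

def retrieve(memos, query):
    """Return the memo whose topic best matches the query, or None."""
    best = max(memos, key=lambda m: overlap(m, query), default=None)
    return best if best is not None and overlap(best, query) > 0 else None

def respond(memos, query):
    """Ground the reply in the retrieved memo when one matches."""
    memo = retrieve(memos, query)
    # In MemoChat the tuned model generates the reply conditioned on the
    # retrieved memo; here we just return the grounding text itself.
    return memo["summary"] if memo else ""

memos = []
memorize(memos, "User's dog is named Rex and likes the beach.", "user pet dog Rex")
memorize(memos, "User works as a nurse in Leeds.", "user job nurse Leeds")
print(respond(memos, "What is my dog called?"))  # grounds the answer in the Rex memo
```

Retrieval here is deliberately crude (bag-of-words overlap on the topic line); the point is the loop structure — every few turns the model memorizes a summary, and each response first queries the memo store for long-range consistency.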
Capability Profile
Benchmark Scores
6 of 14 benchmarks

Long-Context Retrieval: 0/5
- RULER: no data
- NIAH: no data
- LooGLE: no data
- LongBench: no data
- ∞Bench: no data
Multi-Turn Recall: 2/2
Cross-Session Memory: 1/1
Multi-Hop QA: 2/3
Agent Task Memory: 1/1
Personalization: 0/1
- PerLTQA: no data
Factuality / Grounding: 0/1
- RAGAS: no data

Sources: MemoChat paper (arXiv:2308.08239); evaluated on:
- AgentBench Memory Track (Tsinghua KEG, 2308)
- LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)
- MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries (HKUST, 2401)