MemoRAG
by BAAI / Qhjqhj00
System Card
Organization: BAAI / Qhjqhj00
Released: 2024-09
Architecture: external-memory-network / long-context "memory model" + retriever + generator
Details: A 1M-token memory LM (Qwen2/Mistral fine-tunes) builds a global understanding of a corpus and generates query-specific clues, which guide a bge-m3 retriever. The generator (the memory LM itself or an external model) produces final answers via memory-inspired knowledge discovery.
Parameters: —
Domain: rag-retrieval, long-context
Open Source: Yes
Paper: arXiv:2409.05591
Code: Repository
Tags: memory-model, 1m-tokens, bge-m3, theweb-conf-2025
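The Details field above describes a three-stage flow: the memory LM forms a global view of the corpus and emits query-specific clues, the clues steer retrieval, and a generator answers from the retrieved evidence. A minimal Python sketch of that control flow, with toy keyword stand-ins for the memory LM and the bge-m3 retriever (all function names here are hypothetical, not the MemoRAG API):

```python
# Illustrative sketch of a MemoRAG-style pipeline. Assumptions: the real
# system uses a 1M-token memory LM for clue generation and bge-m3 for
# retrieval; both are replaced with trivial keyword stand-ins, so only
# the pipeline shape is faithful.
import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens (toy tokenizer)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def memory_model_clues(corpus: list[str], query: str) -> list[str]:
    """Stand-in for the memory LM: from its 'global understanding' of the
    corpus, emit query-specific clues (here, query words seen in the corpus)."""
    vocab = set().union(*(tokens(doc) for doc in corpus))
    return [w for w in tokens(query) if w in vocab]

def retrieve(corpus: list[str], clues: list[str], k: int = 2) -> list[str]:
    """Stand-in for the bge-m3 retriever: rank passages by clue overlap."""
    return sorted(corpus, key=lambda d: -len(tokens(d) & set(clues)))[:k]

def generate(query: str, passages: list[str]) -> str:
    """Stand-in generator (real system: the memory LM itself or an external LLM)."""
    return f"Q: {query}\nEvidence: " + " | ".join(passages)

corpus = [
    "MemoRAG builds a compressed global memory over the whole corpus.",
    "Generated clues guide the retriever toward relevant passages.",
    "An unrelated passage about the weather.",
]
query = "how do clues guide the retriever over corpus memory"
answer = generate(query, retrieve(corpus, memory_model_clues(corpus, query)))
```

The design point this illustrates: unlike standard RAG, the query is not embedded directly; the memory model first rewrites it into corpus-grounded clues, and those clues drive retrieval.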
Capability Profile
Benchmark Scores
3 of 14 benchmarks reported.

Multi-Turn Recall (0/2): LoCoMo (no data), MemoryBank (no data)
Cross-Session Memory (0/1): LongMemEval (no data)
Multi-Hop QA (1/3)
Agent Task Memory (0/1): AgentBench-Mem (no data)
Personalization (0/1): PerLTQA (no data)
Factuality / Grounding (0/1): RAGAS (no data)

Sources:
- arXiv:2409.05591, Table 1: Mistral-7B-v0.2-32K memory + Phi-3-mini-128K generator; avg of NarrativeQA 27.5, Qasper 43.9, MultiFieldQA 52.2, MuSiQue 33.9, 2WikiMQA 54.1, HotpotQA 54.8
- arXiv:2409.05591, Table 1: MemoRAG on HotpotQA (via LongBench)
- arXiv:2409.05591, Table 1: avg of MultiNews 26.3, GovReport 32.9, En.SUM 15.7, En.QA 22.9