MoT
by Fudan University (Li & Qiu)
System Card
Organization: Fudan University (Li & Qiu)
Released: 2023-05
Architecture: agentic workflow / pre-thought externalized memory
Details: Two-stage framework. In the pre-thinking stage, the LLM reasons over unlabeled data and saves its high-confidence thoughts as an external memory; at test time, it recalls the relevant memory to assist reasoning. Requires no labeled data and no parameter updates.
Parameters: —
Domain: agent-memory, lifelong-learning
Open Source: Yes
Paper: arXiv:2305.05181
Code: Repository
Tags: self-improvement, emnlp-2023, cot, pre-thinking
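The two-stage framework in the Details field can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_thoughts` is a hypothetical stand-in for sampling chain-of-thought answers from an LLM, confidence is approximated by self-consistency (majority vote over samples), and recall uses simple word-overlap similarity in place of a learned retriever.

```python
"""Minimal sketch of a MoT-style (Memory-of-Thought) two-stage loop.

Assumptions (not from the card): `generate_thoughts` stands in for an LLM
sampling several chain-of-thought answers; "high confidence" is approximated
by majority agreement; retrieval is word-overlap similarity.
"""
from collections import Counter


def generate_thoughts(question, n_samples=5):
    # Placeholder for an LLM sampling n chain-of-thought answers.
    # Faked deterministically here so the sketch runs without a model.
    return [f"reasoning for: {question}"] * n_samples


def pre_think(unlabeled_questions, min_agreement=0.8):
    """Pre-test stage: keep only high-confidence (self-consistent) thoughts."""
    memory = []
    for q in unlabeled_questions:
        samples = generate_thoughts(q)
        answer, count = Counter(samples).most_common(1)[0]
        if count / len(samples) >= min_agreement:
            memory.append((q, answer))
    return memory


def recall(memory, query, top_k=2):
    """Test stage: retrieve the stored thoughts most relevant to the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        memory,
        key=lambda item: len(q_words & set(item[0].lower().split())),
        reverse=True,
    )
    return scored[:top_k]


# Recalled thoughts are prepended as demonstrations; no parameters change.
memory = pre_think(["What causes tides?", "Why is the sky blue?"])
demos = recall(memory, "What causes ocean tides?")
prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
```

Note that both stages only read from and write to the external memory, which is what lets the method work with no labeled data and no parameter updates.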
Capability Profile
Benchmark Scores
Coverage: 6 of 14 benchmarks

Long-Context Retrieval: 0/5
  RULER: no data
  NIAH: no data
  LooGLE: no data
  LongBench: no data
  ∞Bench: no data
Multi-Turn Recall: 2/2
Cross-Session Memory: 1/1
Agent Task Memory: 1/1
Personalization: 0/1
  PerLTQA: no data
Factuality / Grounding: 0/1
  RAGAS: no data

Sources: MoT paper (arXiv:2305.05181), evaluated on:
  LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
  LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
  AgentBench Memory Track (Tsinghua KEG, 2308)
  BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
  HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
  MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)