ArcMemo
by UC Berkeley / Stanford (Ho et al.)
System Card
Organization: UC Berkeley / Stanford (Ho et al.)
Released: 2025-09
Architecture: episodic buffer; concept-level (not instance-level) abstract memory
Details: Stores reusable, modular natural-language concepts distilled from solution traces. Concepts are retrieved and integrated into prompts for future queries, enabling test-time continual learning without weight updates (see the sketch below).
Parameters: —
Domain: lifelong-learning, agent-memory
Open Source: Yes
Paper: arXiv:2509.04439
Code: Repository
Tags: arc-agi, concept-memory, test-time, abstract-reasoning
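
The write/retrieve loop described in the Details field lends itself to a short sketch. Below is a minimal, illustrative rendering in Python, assuming only a generic `llm(prompt) -> str` completion function; the class and method names (`ConceptMemory`, `abstract`, `retrieve`, `solve`) are hypothetical and not taken from the ArcMemo repository.

```python
# A minimal sketch of a concept-level memory loop in the ArcMemo style.
# `llm` is an assumed black-box completion function; all names here are
# illustrative, not the ArcMemo codebase's API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Concept:
    """A reusable, modular insight written in natural language."""
    text: str

@dataclass
class ConceptMemory:
    llm: Callable[[str], str]
    concepts: List[Concept] = field(default_factory=list)

    def abstract(self, problem: str, solution_trace: str) -> None:
        """Distill instance-agnostic concepts from a solved problem's trace."""
        prompt = (
            "From the problem and solution below, extract general, reusable "
            "strategies as short bullet points (no instance-specific details).\n"
            f"Problem:\n{problem}\n\nSolution trace:\n{solution_trace}"
        )
        for line in self.llm(prompt).splitlines():
            line = line.strip("- ").strip()
            if line:
                self.concepts.append(Concept(line))

    def retrieve(self, query: str, k: int = 5) -> List[Concept]:
        """Select up to k stored concepts judged relevant to the new query."""
        listing = "\n".join(f"{i}: {c.text}" for i, c in enumerate(self.concepts))
        prompt = (
            f"Query:\n{query}\n\nCandidate concepts:\n{listing}\n\n"
            f"Reply with the indices of up to {k} relevant concepts, comma-separated."
        )
        picks = [int(t) for t in self.llm(prompt).replace(",", " ").split() if t.isdigit()]
        return [self.concepts[i] for i in picks if i < len(self.concepts)][:k]

    def solve(self, query: str) -> str:
        """Answer a query with retrieved concepts prepended to the prompt."""
        hints = "\n".join(f"- {c.text}" for c in self.retrieve(query))
        return self.llm(f"Potentially useful concepts:\n{hints}\n\nTask:\n{query}")
```

The write path (`abstract` after each solved episode) and the read path (`retrieve` before each new query) are what make the learning continual at test time: improvement accumulates in the concept store while the model weights stay frozen.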
Capability Profile
Benchmark Scores
Coverage: 6 of 14 benchmarks

- Long-Context Retrieval: 0/5 (no data: RULER, NIAH, LooGLE, LongBench, ∞Bench)
- Multi-Turn Recall: 2/2
- Cross-Session Memory: 1/1
- Agent Task Memory: 1/1
- Personalization: 0/1 (no data: PerLTQA)
- Factuality / Grounding: 0/1 (no data: RAGAS)

Sources: ArcMemo paper (arXiv:2509.04439); evaluated on:
- LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- AgentBench Memory Track (Tsinghua KEG, 2308)
- BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
- HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)