ExpeL
by Tsinghua University (Zhao et al.)
System Card
Organization: Tsinghua University (Zhao et al.)
Released: 2023-08
Architecture: agentic workflow / natural-language insight extraction from training trajectories
Details: Autonomously gathers experiences across training tasks, derives natural-language insights, and uses its own successful experiences as in-context examples at test time. No parameter updates are made, so it is compatible with proprietary APIs.
Parameters: —
Domain: agent-memory, lifelong-learning
Open Source: Yes
Paper: arXiv:2308.10144
Code: Repository
Tags: experiential-learning, insights, in-context, aaai-2024
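The loop described in the Details field (gather training experiences, distill natural-language insights, reuse insights plus successful trajectories in-context at test time) can be sketched as follows. This is a minimal illustration, not code from the ExpeL repository; all function names, the `llm` callable, and the word-overlap retrieval heuristic are assumptions for the sketch.

```python
# Illustrative ExpeL-style loop (hypothetical names, not the ExpeL codebase):
# 1) run the agent on training tasks, 2) distill insights by comparing
# successes and failures, 3) build test-time prompts from insights plus
# retrieved successful examples. No model parameters are updated.

def gather_experiences(tasks, agent):
    """Run the agent on each training task; record (task, trajectory, success)."""
    return [(t, *agent(t)) for t in tasks]

def extract_insights(experiences, llm):
    """Ask the LLM to compare successes and failures and state reusable rules."""
    successes = [tr for _, tr, ok in experiences if ok]
    failures = [tr for _, tr, ok in experiences if not ok]
    prompt = (
        "Successful trajectories:\n" + "\n".join(successes)
        + "\nFailed trajectories:\n" + "\n".join(failures)
        + "\nList general insights that explain the successes."
    )
    return llm(prompt).splitlines()

def build_test_prompt(task, insights, experiences, k=2):
    """Prepend insights and the k most similar successful trajectories."""
    successes = [(t, tr) for t, tr, ok in experiences if ok]
    # Crude retrieval stand-in: rank by shared-word overlap with the new task.
    successes.sort(key=lambda p: -len(set(p[0].split()) & set(task.split())))
    examples = [tr for _, tr in successes[:k]]
    return ("Insights:\n" + "\n".join(insights)
            + "\n\nExamples:\n" + "\n\n".join(examples)
            + "\n\nTask: " + task)
```

In a real setting, `agent` and `llm` would wrap an API-based model (hence the "compatible with proprietary APIs" note), and retrieval would use embedding similarity rather than word overlap.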
Capability Profile
Benchmark Scores
Coverage: 6 of 14 benchmarks

Long-Context Retrieval: 0/5
- RULER: no data
- NIAH: no data
- LooGLE: no data
- LongBench: no data
- ∞Bench: no data

Multi-Turn Recall: 2/2
Cross-Session Memory: 1/1
Agent Task Memory: 1/1

Personalization: 0/1
- PerLTQA: no data

Factuality / Grounding: 0/1
- RAGAS: no data

Sources:
- arXiv:2308.10144, Figure 5 (success rate read from Figure 5; not a precise table cell)
- ExpeL paper (arXiv:2308.10144); evaluated on LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- ExpeL paper (arXiv:2308.10144); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- ExpeL paper (arXiv:2308.10144); evaluated on AgentBench Memory Track (Tsinghua KEG, 2308)
- ExpeL paper (arXiv:2308.10144); evaluated on BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
- ExpeL paper (arXiv:2308.10144); evaluated on MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)