TRIME
by Princeton NLP (Zhong, Lei, Chen)
System Card
Organization: Princeton NLP (Zhong, Lei, Chen)
Released: 2022-05
Architecture: external-memory-network / training-time memory augmentation with in-batch memories
Details: Trains a language model with memory augmentation by treating other in-batch examples as accessible memory during training. The same objective adapts to local, long-term, and external memories at test time without separate encoders (see the sketches below the tag list).
Parameters: —
Domain: rag-retrieval, long-context
Open Source: Yes
Paper: arXiv:2205.12674
Code: Repository (github.com/princeton-nlp/TRIME)
Tags: emnlp-2022, training-aware, in-batch, memory-aware
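The Details field summarizes the paper's core idea: next-token probability mass can come either from a token's output embedding or from in-batch memories that carry that token, i.e. p(w | c_i) ∝ exp(E_w · h_i) + Σ_{j: x_j = w} exp(sim(h_i, h_j)). Below is a minimal PyTorch sketch of that training loss, assuming a temperature-scaled dot-product similarity and using every other position in the batch as a candidate memory; the function and variable names are illustrative, not taken from the TRIME codebase.

```python
import torch

def trime_loss(hidden: torch.Tensor, targets: torch.Tensor,
               emb: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """TRIME-style loss over one batch (illustrative sketch).

    hidden:  (N, d) contextual representations for N token positions
    targets: (N,)   gold next-token ids for those positions
    emb:     (V, d) output (vocab) embedding matrix
    """
    # Standard output-embedding logits E_w . h_i for every vocab token.
    vocab_logits = hidden @ emb.t()                           # (N, V)

    # Similarities between positions; every other in-batch position is a
    # candidate memory. A position is never its own memory.
    sim = (hidden @ hidden.t()) / tau                         # (N, N)
    self_mask = torch.eye(len(hidden), dtype=torch.bool, device=hidden.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    # Memory j supports position i only when its token x_j equals w_i.
    positive = targets.unsqueeze(0) == targets.unsqueeze(1)   # (N, N)
    pos_sim = sim.masked_fill(~positive, float('-inf'))

    # Numerator: exp(E_{w_i} . h_i) plus the positive in-batch memories.
    gold = vocab_logits.gather(1, targets.unsqueeze(1))       # (N, 1)
    log_num = torch.logsumexp(torch.cat([gold, pos_sim], dim=1), dim=1)

    # Denominator: all vocab tokens plus all in-batch memories.
    log_den = torch.logsumexp(torch.cat([vocab_logits, sim], dim=1), dim=1)

    return (log_den - log_num).mean()
```

At test time the same scoring rule applies, with memories drawn from the local context, long-term history, or an external datastore instead of the batch. A sketch under the same assumptions, where the memory bank is a set of stored representations paired with the tokens they predicted:

```python
def trime_predict(h: torch.Tensor, emb: torch.Tensor,
                  mem_keys: torch.Tensor, mem_vals: torch.Tensor,
                  tau: float = 1.0) -> torch.Tensor:
    """Next-token distribution with an external memory bank (sketch).

    h:        (d,)   representation of the current context
    emb:      (V, d) output embedding matrix
    mem_keys: (M, d) stored context representations
    mem_vals: (M,)   token id paired with each stored representation
    """
    vocab_logits = emb @ h                    # (V,)
    mem_logits = (mem_keys @ h) / tau         # (M,)

    # Shared shift for numerical stability before exponentiating.
    shift = torch.maximum(vocab_logits.max(), mem_logits.max())

    # p(w) ∝ exp(E_w . h) + Σ_{j: x_j = w} exp(sim(h, h_j)):
    # add each memory's mass onto the token it stores.
    probs = (vocab_logits - shift).exp()
    probs = probs.scatter_add(0, mem_vals, (mem_logits - shift).exp())
    return probs / probs.sum()
```

No separate memory encoder appears in either sketch: memories are the model's own hidden states, which is the property the Details field highlights.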
Capability Profile
Benchmark Scores
6 of 14 benchmarks

Multi-Turn Recall: 0/2
  LoCoMo: no data
  MemoryBank: no data
Cross-Session Memory: 0/1
  LongMemEval: no data
Agent Task Memory: 0/1
  AgentBench-Mem: no data
Personalization: 0/1
  PerLTQA: no data
Factuality / Grounding: 0/1
  RAGAS: no data
Sources: TRIME paper (arXiv:2205.12674); evaluated on:
  LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
  RULER: What's the Real Context Size of Your Long-Context Language Models? (NVIDIA, 2404)
  BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
  HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
  InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens (Tsinghua / OpenBMB, 2402)
  LooGLE: Can Long-Context Language Models Understand Long Contexts? (Peking University, 2311)