
TRIME

by Princeton NLP (Zhong, Lei, Chen)

System Card

Organization: Princeton NLP (Zhong, Lei, Chen)
Released: 2022-05
Architecture: external-memory-network / training-time memory augmentation with in-batch memories
Details: Trains a language model with memory augmentation by treating in-batch examples as accessible memory during training. Adaptable to local, long-term, and external memories at test time without separate encoders.
Parameters:
Domain: rag-retrieval, long-context
Open Source: Yes
Tags: emnlp-2022, training-aware, in-batch, memory-aware
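
The in-batch memory mechanism described in Details can be sketched as follows: next-token scores combine the usual output-embedding logits with similarities to "memory" contexts drawn from the same batch, each credited to the token that followed that context. This is a minimal illustrative sketch, not code from the TRIME release; all function and variable names are assumptions.

```python
import math

def trime_next_token_probs(query, vocab_emb, mem_keys, mem_token_ids):
    """Sketch of a TRIME-style next-token distribution.

    Standard softmax scores exp(E_w . c_t) are combined with memory
    scores exp(c_t . c_j), where each in-batch memory context c_j
    contributes to the slot of the token x_j that followed it.
    Names are illustrative, not from the TRIME codebase.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Standard LM scores: one per vocabulary item.
    scores = [math.exp(dot(e_w, query)) for e_w in vocab_emb]
    # Memory contribution: credit each memory entry's similarity
    # to the token it was observed to predict in the batch.
    for c_j, x_j in zip(mem_keys, mem_token_ids):
        scores[x_j] += math.exp(dot(query, c_j))
    total = sum(scores)
    return [s / total for s in scores]
```

Because a memory entry adds a positive term to exactly one token's unnormalized score, it strictly raises that token's probability relative to the memory-free softmax, which is the signal the training objective exploits; at test time the same formulation extends to local, long-term, or external memory stores.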

Capability Profile

Benchmark Scores

6 of 14 benchmarks reported

Long-Context Retrieval (4/5)
  RULER: 73.670p
  NIAH: no data
  LooGLE: 75.436p
  ∞Bench: 79.563p

Multi-Turn Recall (0/2)
  LoCoMo: no data
  MemoryBank: no data

Cross-Session Memory (0/1)
  LongMemEval: no data

Multi-Hop QA (2/3)
  BABILong: 74.446p
  MultiHop-RAG: no data
  HotpotQA: 67.436p

Agent Task Memory (0/1)
  AgentBench-Mem: no data

Personalization (0/1)
  PerLTQA: no data

Factuality / Grounding (0/1)
  RAGAS: no data