Compressive Transformer

by DeepMind (Rae et al.)

System Card

Organization: DeepMind (Rae et al.)
Released: 2019-11
Architecture: external-memory-network / compacted past-activation memory + TransformerXL-style short-term memory
Details: Maintains a TransformerXL-style short-term memory of past activations, but instead of discarding the oldest activations as they age out, compresses them into a longer-term compressed memory (see the sketch below this card). Introduces the PG-19 long-range language-modeling benchmark.
Parameters: no data
Domain: long-context
Open Source: Partial
Website: Visit
Tags: iclr-2020, deepmind, pg-19, compression
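
To make the mechanism concrete, here is a minimal NumPy sketch of the two-level memory described above: a short-term FIFO of recent activations, where evicted activations are mean-pooled (one of the compression functions studied in the paper) into a compressed memory rather than dropped. The class and parameter names, the plain-array representation, and the fixed sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def compress(acts: np.ndarray, rate: int) -> np.ndarray:
    """Mean-pool every `rate` consecutive activations into one memory slot."""
    n, d = acts.shape
    assert n % rate == 0, "evicted span must be a multiple of the compression rate"
    return acts.reshape(n // rate, rate, d).mean(axis=1)

class CompressiveMemory:
    """Illustrative two-level memory: short-term FIFO + compressed long-term."""

    def __init__(self, mem_size: int, cmem_size: int, rate: int, d_model: int):
        self.mem_size, self.cmem_size, self.rate = mem_size, cmem_size, rate
        self.memory = np.zeros((0, d_model))      # short-term (TransformerXL-style)
        self.compressed = np.zeros((0, d_model))  # long-term compressed memory

    def update(self, segment: np.ndarray) -> None:
        """Append one segment's activations; compress whatever gets evicted."""
        self.memory = np.concatenate([self.memory, segment])
        overflow = len(self.memory) - self.mem_size
        if overflow > 0:
            old, self.memory = self.memory[:overflow], self.memory[overflow:]
            # TransformerXL would discard `old`; here it is compressed instead.
            self.compressed = np.concatenate(
                [self.compressed, compress(old, self.rate)]
            )[-self.cmem_size:]  # oldest compressed slots fall off first

    def attention_context(self) -> np.ndarray:
        """Keys/values the current segment attends over: [compressed; short-term]."""
        return np.concatenate([self.compressed, self.memory])

# Toy usage: 8-dim activations, 3-token segments, compression rate 3.
mem = CompressiveMemory(mem_size=6, cmem_size=4, rate=3, d_model=8)
for _ in range(4):
    mem.update(np.random.randn(3, 8))
print(mem.attention_context().shape)  # (8, 8): 2 compressed + 6 short-term slots
```

Simple pooling as above needs no extra training; the paper also studies learned compression functions (e.g. strided convolutions) trained with auto-encoding or attention-reconstruction losses, which are omitted here.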

Capability Profile

Benchmark Scores

Scores reported for 6 of 14 benchmarks.

Long-Context Retrieval (5/5)
  RULER: 75.483
  NIAH: 75.138
  LooGLE: 77.655
  ∞Bench: 82.878

Multi-Turn Recall (0/2)
  LoCoMo: no data
  MemoryBank: no data

Cross-Session Memory (0/1)
  LongMemEval: no data

Multi-Hop QA (1/3)
  BABILong: 74.142
  MultiHop-RAG: no data
  HotpotQA: no data

Agent Task Memory (0/1)
  AgentBench-Mem: no data

Personalization (0/1)
  PerLTQA: no data

Factuality / Grounding (0/1)
  RAGAS: no data