Memorizing Transformer
by Google Research (Wu, Rabe, Hutchins, Szegedy)
System Card
Organization: Google Research (Wu, Rabe, Hutchins, Szegedy)
Released: 2022-03
Architecture: external-memory-network / Non-differentiable kNN lookup over (key, value) pairs
Details: Approximate kNN lookup into a non-differentiable cache of recent attention (key, value) pairs. Scales the effective attention context up to 262k tokens. (A minimal sketch of the mechanism follows the card fields below.)
Parameters: —
Domain: long-context, lifelong-learning
Open Source: No
Paper: arXiv:2203.08913
Tags: iclr-2022-spotlight, knn, non-differentiable, 262k
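The Details field above describes kNN-augmented attention over an external memory. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the names `KNNMemory` and `knn_augmented_attention`, the scalar `gate`, and the use of exact (rather than approximate) top-k retrieval are all simplifying assumptions made here for illustration. In the paper the retrieval is approximate so the cache can grow very large, and the gate that mixes memory attention with local attention is learned per head.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class KNNMemory:
    """Non-differentiable cache of past attention (key, value) pairs."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def add(self, keys, values):
        # Entries are stored as plain arrays: no gradients flow into the cache.
        self.keys = np.concatenate([self.keys, keys])
        self.values = np.concatenate([self.values, values])

    def lookup(self, queries, top_k=32):
        # Exact top-k by dot product; the paper uses approximate kNN instead,
        # which is what lets the cache scale to hundreds of thousands of entries.
        top_k = min(top_k, len(self.keys))
        scores = queries @ self.keys.T                  # (num_q, num_mem)
        idx = np.argsort(-scores, axis=-1)[:, :top_k]   # (num_q, top_k)
        return self.keys[idx], self.values[idx]         # (num_q, top_k, dim)

def knn_augmented_attention(q, local_k, local_v, memory, gate=0.5, top_k=32):
    """Mix local attention with attention over retrieved memory entries.

    `gate` is a fixed stand-in for the learned per-head gate in the paper.
    """
    d = q.shape[-1]
    # Standard attention over the current (local) segment.
    local_out = softmax(q @ local_k.T / np.sqrt(d)) @ local_v
    # Attention restricted to the top-k retrieved (key, value) pairs.
    mem_k, mem_v = memory.lookup(q, top_k=top_k)
    mem_scores = np.einsum('qd,qkd->qk', q, mem_k) / np.sqrt(d)
    mem_out = np.einsum('qk,qkd->qd', softmax(mem_scores), mem_v)
    return gate * mem_out + (1.0 - gate) * local_out

# Toy usage: cache one past segment, then attend from a new segment.
dim, seg = 64, 128
memory = KNNMemory(dim)
memory.add(np.random.randn(seg, dim), np.random.randn(seg, dim))
out = knn_augmented_attention(np.random.randn(seg, dim),
                              np.random.randn(seg, dim),
                              np.random.randn(seg, dim),
                              memory)
print(out.shape)  # (128, 64)
```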
Capability Profile
Benchmark Scores
6 of 14 benchmarks evaluated
Multi-Turn Recall: 1/2
MemoryBank: no data
Cross-Session Memory: 1/1
Multi-Hop QA: 1/3
Agent Task Memory: 0/1
AgentBench-Mem: no data
Personalization: 0/1
PerLTQA: no data
Factuality / Grounding: 0/1
RAGAS: no data
Sources: Memorizing Transformer paper (arXiv:2203.08913); evaluated on:
BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens (Tsinghua / OpenBMB, 2402)
LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
LooGLE: Can Long-Context Language Models Understand Long Contexts? (Peking University, 2311)