Titans
by lucidrains (community) / paper by Google Research
System Card
Organization: lucidrains (community) / paper by Google Research
Released: 2025-01
Architecture: external-memory-network / Neural memory module (Memory-as-Context)
Details: Unofficial reference PyTorch implementation of Google's Titans architecture: a learned NeuralMemory module that updates its parameters at test time, combined with a Memory-as-Context (MAC) transformer wrapper for long-term memory.
Parameters: —
Domain: long-context, lifelong-learning
Open Source: Yes
Paper: arXiv:2501.00663
Code: Repository
Tags: neural-memory, test-time-learning, transformer, google-research
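The core idea behind the Details entry, a memory whose parameters are updated by gradient steps at inference time, can be sketched as follows. This is a minimal illustration, not the repository's API: it uses a single linear memory where the paper uses a small MLP, and the hyperparameter values (momentum `eta`, learning rate `theta`, forget gate `alpha`) are illustrative, not taken from the paper or code.

```python
import numpy as np

class NeuralMemory:
    """Sketch of a Titans-style neural memory: parameters are updated at
    test time from a 'surprise' signal (the gradient of an associative
    key-value loss), with momentum and a forgetting (weight-decay) gate."""

    def __init__(self, dim, eta=0.9, theta=0.1, alpha=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.M = rng.normal(scale=0.1, size=(dim, dim))  # memory parameters
        self.S = np.zeros_like(self.M)                   # momentum (past surprise)
        self.eta, self.theta, self.alpha = eta, theta, alpha

    def write(self, k, v):
        # Associative loss ||M k - v||^2; its gradient w.r.t. M is 2 (M k - v) k^T.
        err = self.M @ k - v
        grad = 2.0 * np.outer(err, k)
        self.S = self.eta * self.S - self.theta * grad   # momentary + past surprise
        self.M = (1.0 - self.alpha) * self.M + self.S    # forget gate, then update

    def read(self, q):
        return self.M @ q                                # retrieve value for query


dim = 8
mem = NeuralMemory(dim)
k = np.eye(dim)[0]          # a key
v = np.ones(dim)            # the value to associate with it
before = np.linalg.norm(mem.read(k) - v)
for _ in range(100):        # test-time writes: no separate training phase
    mem.write(k, v)
after = np.linalg.norm(mem.read(k) - v)
print(before, after)        # retrieval error for (k, v) shrinks after writes
```

In the MAC configuration, retrieved memory states are prepended to the attention context of a transformer segment; the sketch above covers only the memory update itself.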
Capability Profile
Benchmark Scores
8 of 14 benchmarks reported.
- Multi-Turn Recall: 1/2 (MemoryBank: no data)
- Cross-Session Memory: 1/1
- Multi-Hop QA: 1/3
- Agent Task Memory: 0/1 (AgentBench-Mem: no data)
- Personalization: 0/1 (PerLTQA: no data)
- Factuality / Grounding: 0/1 (RAGAS: no data)

Sources:
- arXiv:2501.00663 Table 2 — Titans (MAC) avg on RULER S-NIAH-PK/N/W at 2K/4K/8K/16K
- arXiv:2501.00663 Table 2 — Titans (MAC) S-NIAH-PK avg: 99.2 / 98.8 / 99.0 / 98.4 at 2K/4K/8K/16K
- Titans paper (arXiv:2501.00663); evaluated on BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
- Titans paper (arXiv:2501.00663); evaluated on InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens (Tsinghua / OpenBMB, 2402)
- Titans paper (arXiv:2501.00663); evaluated on LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- Titans paper (arXiv:2501.00663); evaluated on LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- Titans paper (arXiv:2501.00663); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- Titans paper (arXiv:2501.00663); evaluated on LooGLE: Can Long-Context Language Models Understand Long Contexts? (Peking University, 2311)