kNN-LM
by Stanford / Facebook AI Research (Khandelwal et al.)
System Card
Organization: Stanford / Facebook AI Research (Khandelwal et al.)
Released: 2019-11
Architecture: external-memory-network / linearly interpolated kNN over the LM embedding space
Details: Interpolates pretrained LM predictions with a kNN distribution over the LM's embedding space. Nearest neighbors drawn from any text collection enable domain adaptation without retraining.
Parameters: not reported
Domain: rag-retrieval, lifelong-learning
Open Source: Yes
Paper: arXiv:1911.00172
Code: Repository
Tags: iclr-2020, nearest-neighbor, datastore, memorization
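The interpolation described under Details can be sketched in a few lines. This is a toy illustration of the kNN-LM idea, not the paper's actual setup: the datastore, embedding size, vocabulary, and the interpolation weight `lam` below are all illustrative assumptions.

```python
import math

# Toy sketch of kNN-LM interpolation (Khandelwal et al., 2019).
# The datastore maps stored context embeddings (keys) to the token
# that followed each context (values). All sizes here are made up.

def knn_lm_probs(query, keys, values, p_lm, k=3, lam=0.25):
    """Return lam * p_kNN + (1 - lam) * p_LM for one prediction step."""
    # Squared L2 distance from the query embedding to every stored key.
    dists = [sum((q - x) ** 2 for q, x in zip(query, key)) for key in keys]
    # Indices of the k nearest neighbors.
    nn = sorted(range(len(keys)), key=lambda i: dists[i])[:k]
    # Softmax over negative distances: closer neighbors get more weight.
    ws = [math.exp(-dists[i]) for i in nn]
    total = sum(ws)
    ws = [w / total for w in ws]
    # p_kNN places each neighbor's weight on the token it stored.
    p_knn = [0.0] * len(p_lm)
    for i, w in zip(nn, ws):
        p_knn[values[i]] += w
    # Linear interpolation with the base LM distribution.
    return [lam * pk + (1 - lam) * pl for pk, pl in zip(p_knn, p_lm)]

# Usage: 4-dim embeddings, a vocab of 5 tokens, a uniform base LM.
keys = [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0],
        [2.0, 2.0, 2.0, 2.0], [3.0, 3.0, 3.0, 3.0]]
values = [2, 3, 3, 4]
p_lm = [0.2] * 5
p = knn_lm_probs([0.1, 0.0, 0.0, 0.0], keys, values, p_lm)
```

Because the neighbors come from whatever text the datastore indexes, swapping the datastore (e.g. to domain-specific text) shifts the kNN term without touching the LM weights, which is how the card's "domain adaptation without retraining" claim works.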
Capability Profile
Benchmark Scores
6 of 14 benchmarks reported.
- Long-Context Retrieval: 1/5
- Multi-Turn Recall: 2/2
- Cross-Session Memory: 1/1
- Agent Task Memory: 0/1 (AgentBench-Mem: no data)
- Personalization: 0/1 (PerLTQA: no data)
- Factuality / Grounding: 0/1 (RAGAS: no data)

Sources:
- kNN-LM paper (arXiv:1911.00172); evaluated on BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)
- kNN-LM paper (arXiv:1911.00172); evaluated on HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- kNN-LM paper (arXiv:1911.00172); evaluated on LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- kNN-LM paper (arXiv:1911.00172); evaluated on LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- kNN-LM paper (arXiv:1911.00172); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- kNN-LM paper (arXiv:1911.00172); evaluated on MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)