RWKV

by RWKV Foundation / BlinkDL community

System Card

Organization: RWKV Foundation / BlinkDL community
Released: 2023-05
Architecture: external-memory-network / linear-attention RNN with receptance-weighted key-value
Details: Combines Transformer-style parallelizable training with RNN-style linear-time inference through receptance-weighted key-value (RWKV) attention. Inference runs in constant memory with no KV cache, so context length is unbounded in principle (see the sketch below the card).
Parameters:
Domain: long-context
Open Source: Yes
Website:
Tags: rnn, linear-attention, constant-memory, efficient
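
The constant-memory claim in the Details row is easiest to see in the recurrence itself. Below is a minimal NumPy sketch of the RWKV-4-style WKV recurrence in its naive form (real implementations additionally track a running maximum exponent for numerical stability, and the full time-mixing block adds token-shift interpolation and projections). The function name, shapes, and parameter scales here are illustrative assumptions, not the project's API; the point is that the entire history is summarized by two per-channel accumulators instead of a growing KV cache.

```python
# Minimal sketch (assumed shapes, naive numerics) of an RWKV-4-style
# WKV recurrence: state is two (C,) vectors, independent of sequence length.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wkv_recurrent(r, k, v, w, u):
    """Recurrent WKV over one sequence.

    r, k, v : (T, C) receptance, key, value per token and channel
    w       : (C,)  per-channel decay rate (>= 0 in this formulation)
    u       : (C,)  per-channel "bonus" applied only to the current token
    """
    T, C = k.shape
    a = np.zeros(C)                # running exp-weighted sum of values
    b = np.zeros(C)                # running sum of exp weights
    out = np.empty((T, C))
    decay = np.exp(-w)             # applied to the state once per step
    for t in range(T):
        e_cur = np.exp(u + k[t])                 # bonus weight for token t
        wkv = (a + e_cur * v[t]) / (b + e_cur)   # weighted average of values
        out[t] = sigmoid(r[t]) * wkv             # receptance gates the output
        e_k = np.exp(k[t])
        a = decay * a + e_k * v[t]               # state stays (C,)-sized:
        b = decay * b + e_k                      # no KV cache, constant memory
    return out

# Usage: memory is O(C) regardless of T, unlike a Transformer's O(T*C) cache.
T, C = 1024, 64
rng = np.random.default_rng(0)
r, k, v = rng.standard_normal((3, T, C)) * 0.1
y = wkv_recurrent(r, k, v, w=np.full(C, 0.5), u=np.zeros(C))
print(y.shape)  # (1024, 64)
```

The same quantity can also be computed for all positions at once during training (a prefix sum over exponentially decayed terms), which is what gives RWKV the Transformer-style parallelizable training the card describes alongside RNN-style linear-time inference.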

Capability Profile

Benchmark Scores

Scores reported for 6 of 14 benchmarks.
Long-Context Retrieval: 5/5
  RULER: 75.8 (87th percentile)
  NIAH: 71.2 (15th percentile)
  LooGLE: 73.9 (27th percentile)
  ∞Bench: 72.6 (16th percentile)
Multi-Turn Recall: 0/2
  LoCoMo: no data
  MemoryBank: no data
Cross-Session Memory: 0/1
  LongMemEval: no data
Multi-Hop QA: 1/3
  BABILong: 77.2 (72nd percentile)
  MultiHop-RAG: no data
  HotpotQA: no data
Agent Task Memory: 0/1
  AgentBench-Mem: no data
Personalization: 0/1
  PerLTQA: no data
Factuality / Grounding: 0/1
  RAGAS: no data