LightRAG
by HKUDS (HKU Data Intelligence Lab)
System Card
Organization: HKUDS (HKU Data Intelligence Lab)
Released: 2024-10
Architecture: Graph-RAG / dual-level entity-relation retrieval
Details: Builds a knowledge graph via entity-relation extraction, then retrieves at two granularities: low-level (specific entities) and high-level (broader concepts), combined with vector search. Supports Neo4j, PostgreSQL, and MongoDB storage backends.
Parameters: —
Domain: rag-retrieval, knowledge-graph
Open Source: Yes
Paper: arXiv:2410.05779
Code: Repository
Tags: graph-rag, dual-level, entity-extraction, emnlp-2025, academic
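The dual-level retrieval described in the Details field can be sketched as follows. This is an illustrative toy, not LightRAG's actual API: the knowledge graph, function names, and keyword inputs below are all hypothetical, standing in for the entity-level and concept-level indexes the paper describes.

```python
# Toy knowledge graph: entity -> (description, broader concepts it belongs to).
# In LightRAG these come from LLM-based entity-relation extraction; here they
# are hand-written for illustration.
KG = {
    "Neo4j":      ("Graph database backend",  {"storage", "graph databases"}),
    "PostgreSQL": ("Relational backend",      {"storage", "databases"}),
    "HotpotQA":   ("Multi-hop QA benchmark",  {"evaluation", "multi-hop QA"}),
}

def low_level_retrieve(entity_keywords):
    """Low-level granularity: match specific entities by name."""
    return {e: desc for e, (desc, _) in KG.items()
            if any(k.lower() in e.lower() for k in entity_keywords)}

def high_level_retrieve(concept_keywords):
    """High-level granularity: match entities via their broader concepts."""
    return {e: desc for e, (desc, concepts) in KG.items()
            if concepts & set(concept_keywords)}

def dual_level_query(entity_keywords, concept_keywords):
    """Combine both granularities, deduplicating by entity.

    A real system would also merge in vector-search hits over text chunks;
    that step is omitted here to keep the sketch self-contained.
    """
    results = low_level_retrieve(entity_keywords)
    results.update(high_level_retrieve(concept_keywords))
    return results

# Specific-entity keyword hits "Neo4j"; abstract concept hits "HotpotQA".
hits = dual_level_query(["neo4j"], ["evaluation"])
```

The point of the split is that a concrete query term ("neo4j") resolves through the entity index while an abstract one ("evaluation") resolves through the concept layer, and the union of both feeds the generator.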
Capability Profile
Benchmark Scores
Scored on 6 of 14 benchmarks.
- Multi-Turn Recall: 0/2 (LoCoMo: no data; MemoryBank: no data)
- Cross-Session Memory: 1/1
- Multi-Hop QA: 2/3
- Agent Task Memory: 0/1 (AgentBench-Mem: no data)
- Personalization: 0/1 (PerLTQA: no data)
- Factuality / Grounding: 1/1
Sources: LightRAG paper (arXiv:2410.05779), evaluated on:
- HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries (HKUST, 2401)
- RAGAS: Automated Evaluation of Retrieval-Augmented Generation (Exploding Gradients, 2309)
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- RULER: What's the Real Context Size of Your Long-Context Language Models (NVIDIA, 2404)