MiniRAG
by HKUDS
System Card
Organization: HKUDS
Released: 2025-02
Architecture: graph-rag (heterogeneous text+entity graph for small LMs)
Details: Combines text chunks and named entities in one heterogeneous graph; a topology-aware retriever traverses the graph for lightweight knowledge discovery. Targets small LMs; uses 25% of LightRAG's storage footprint.
Parameters: —
Domain: rag-retrieval, knowledge-graph
Open Source: Yes
Paper: arXiv:2501.06713
Code: Repository
Tags: small-lm, heterogeneous-graph, lightweight, edge-friendly
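The architecture described above can be sketched in miniature: text chunks and the named entities they mention share one graph, and retrieval walks the topology outward from the entities found in a query. This is an illustrative sketch based only on the card's description; the class and method names (`HeteroGraph`, `add_chunk`, `retrieve`) are hypothetical, not the actual MiniRAG API.

```python
# Hypothetical sketch of a heterogeneous chunk+entity graph with a
# topology-aware retriever, as described in the card. Not MiniRAG's real code.
from collections import defaultdict

class HeteroGraph:
    def __init__(self):
        self.nodes = {}              # node_id -> {"kind": "chunk"|"entity", "text": str}
        self.adj = defaultdict(set)  # undirected adjacency between chunks and entities

    def add_chunk(self, cid, text, entities):
        """Index a text chunk and link it to each entity it mentions."""
        self.nodes[cid] = {"kind": "chunk", "text": text}
        for ent in entities:
            eid = f"entity::{ent.lower()}"
            self.nodes.setdefault(eid, {"kind": "entity", "text": ent})
            self.adj[cid].add(eid)
            self.adj[eid].add(cid)

    def retrieve(self, query_entities, hops=2):
        """Topology-aware retrieval: start from entities named in the query
        and walk up to `hops` steps, collecting chunk nodes along the way."""
        frontier = {f"entity::{e.lower()}" for e in query_entities} & self.nodes.keys()
        seen, chunks = set(frontier), []
        for _ in range(hops):
            nxt = set()
            for nid in frontier:
                for nb in self.adj[nid]:
                    if nb in seen:
                        continue
                    seen.add(nb)
                    if self.nodes[nb]["kind"] == "chunk":
                        chunks.append(self.nodes[nb]["text"])
                    nxt.add(nb)
            frontier = nxt
        return chunks

g = HeteroGraph()
g.add_chunk("c1", "Alice founded Acme in 2010.", ["Alice", "Acme"])
g.add_chunk("c2", "Acme acquired Beta Labs.", ["Acme", "Beta Labs"])
one_hop = g.retrieve(["Alice"])          # direct mentions of Alice
multi_hop = g.retrieve(["Alice"], hops=3)  # also reaches chunks via Acme
```

Because edges encode entity co-mention rather than embedding similarity, the traversal itself supplies multi-hop structure, which is why this style of retriever can stay useful even with a small LM doing the final generation.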
Capability Profile
Benchmark Scores
6 of 14 benchmarks

- Multi-Turn Recall: 0/2 (LoCoMo: no data; MemoryBank: no data)
- Cross-Session Memory: 1/1
- Multi-Hop QA: 2/3
- Agent Task Memory: 0/1 (AgentBench-Mem: no data)
- Personalization: 0/1 (PerLTQA: no data)
- Factuality / Grounding: 1/1
Sources:
- arXiv:2501.06713, Table 1: accuracy with gpt-4o-mini; SLM backbones: Phi-3.5-mini 49.96, GLM-Edge 51.41, Qwen2.5-3B 48.55
- MiniRAG paper (arXiv:2501.06713); evaluated on HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- MiniRAG paper (arXiv:2501.06713); evaluated on RAGAS: Automated Evaluation of Retrieval-Augmented Generation (Exploding Gradients, 2309)
- MiniRAG paper (arXiv:2501.06713); evaluated on LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- MiniRAG paper (arXiv:2501.06713); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- MiniRAG paper (arXiv:2501.06713); evaluated on RULER: What's the Real Context Size of Your Long-Context Language Models? (NVIDIA, 2404)