Atlas
by Meta AI FAIR (Izacard et al.)
System Card
Organization: Meta AI FAIR (Izacard et al.)
Released: 2022-08
Architecture: vector-rag / Contriever retriever + fusion-in-decoder few-shot reader
Details: Retrieval-augmented few-shot learner combining a dense retriever (Contriever) with a fusion-in-decoder reader. Pretrained end-to-end so the retriever and reader co-evolve for knowledge-intensive tasks.
Parameters: —
Domain: rag-retrieval, knowledge-graph
Open Source: Yes
Paper: arXiv:2208.03299
Code: Repository
Tags: jmlr-2023, few-shot, fusion-in-decoder, knowledge-intensive
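The Details field above describes a two-stage design: a Contriever-style dense retriever scores passages by embedding similarity, and a fusion-in-decoder reader encodes each retrieved passage independently before a single decoder attends over all of them jointly. A minimal toy sketch of that data flow, assuming nothing from the Atlas paper itself: bag-of-words vectors stand in for the learned dense encoder, and random matrices stand in for real encoder outputs.

```python
import numpy as np

# Illustrative toy corpus; not from the Atlas paper.
passages = [
    "paris is the capital of france",
    "the eiffel tower is in paris",
    "berlin is the capital of germany",
]
query = "what is the capital of france"

vocab = {w: i for i, w in enumerate(sorted({w for t in passages + [query] for w in t.split()}))}

def embed(text):
    # Stand-in for a Contriever-style dense encoder: bag-of-words
    # counts, L2-normalised so a dot product is cosine similarity.
    v = np.zeros(len(vocab))
    for w in text.split():
        v[vocab[w]] += 1.0
    return v / np.linalg.norm(v)

# Dense retrieval: score every passage against the query, keep top-k.
q = embed(query)
scores = np.array([embed(p) @ q for p in passages])
top_k = np.argsort(-scores)[:2]

# Fusion-in-decoder: each (query, passage) pair is encoded independently,
# then the encoder outputs are concatenated along the sequence axis so
# one decoder can cross-attend over all retrieved evidence at once.
d_model, seq_len = 8, 5
rng = np.random.default_rng(0)
encoder_outputs = [rng.standard_normal((seq_len, d_model)) for _ in top_k]
fused = np.concatenate(encoder_outputs, axis=0)  # shape (k * seq_len, d_model)

print(passages[int(top_k[0])])  # best-scoring passage
print(fused.shape)
```

Encoding passages independently keeps the encoder cost linear in the number of retrieved passages, while the concatenated cross-attention lets the decoder combine evidence across them; the end-to-end pretraining noted in Details is what lets the retriever learn which passages actually help the reader.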
Capability Profile
Benchmark Scores
6 of 14 benchmarks
- Multi-Turn Recall: 0/2
- LoCoMo: no data
- MemoryBank: no data
- Cross-Session Memory: 1/1
- Multi-Hop QA: 2/3
- Agent Task Memory: 0/1
- AgentBench-Mem: no data
- Personalization: 0/1
- PerLTQA: no data
- Factuality / Grounding: 1/1
Sources:
- arXiv:2208.03299 Table 10 — KILT-filtered HotpotQA EM, full-train; 64-shot EM=34.7
- Atlas paper (arXiv:2208.03299); evaluated on MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries (HKUST, 2401)
- Atlas paper (arXiv:2208.03299); evaluated on RAGAS: Automated Evaluation of Retrieval-Augmented Generation (Exploding Gradients, 2309)
- Atlas paper (arXiv:2208.03299); evaluated on LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Tsinghua KEG, 2308)
- Atlas paper (arXiv:2208.03299); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- Atlas paper (arXiv:2208.03299); evaluated on RULER: What's the Real Context Size of Your Long-Context Language Models (NVIDIA, 2404)