Memoro
by MIT Media Lab
System Card
Organization: MIT Media Lab
Released: 2024-03
Architecture: vector-RAG / wearable semantic memory with query/queryless modes
Details: Wearable bone-conduction audio assistant that uses an LLM to infer memory needs mid-conversation. Performs semantic search over user memories and supports two modes: explicit voice queries and on-demand predictive assistance.
Parameters: —
Domain: personalization, agent-memory
Open Source: No
Paper: arXiv:2403.02135
Tags: chi-2024, wearable, audio, bone-conduction, real-time
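The architecture described above is a vector-RAG memory: conversation snippets are embedded, stored, and retrieved by semantic similarity, either against an explicit voice query or against recent conversational context in the queryless mode. Below is a minimal sketch of that retrieval loop under stated assumptions: it uses a toy bag-of-words embedding and cosine similarity in place of the learned sentence encoder a real system like Memoro would use, and all names (`MemoryStore`, `embed`, `retrieve`) are illustrative, not taken from the Memoro implementation.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a learned sentence
    # encoder; only the retrieve-by-similarity shape matters here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stores conversation snippets; retrieves the most relevant ones."""

    def __init__(self):
        self.memories = []  # list of (text, embedding) pairs

    def add(self, text):
        self.memories.append((text, embed(text)))

    def retrieve(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(q, m[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("Alice recommended the sushi place near the lab")
store.add("The CHI deadline is in September")

# Query mode: an explicit voice query is matched against stored memories.
# Queryless mode would instead pass recent conversation context as `query`.
print(store.retrieve("where did Alice say to eat", k=1)[0])
```

The two interaction modes differ only in where the query comes from: a transcribed voice request, or a rolling window of the ongoing conversation that the LLM uses to infer what the wearer needs.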
Capability Profile
Benchmark Scores
6 of 14 benchmarks

Long-Context Retrieval: 0/5
  - RULER: no data
  - NIAH: no data
  - LooGLE: no data
  - LongBench: no data
  - ∞Bench: no data
Multi-Turn Recall: 2/2
Cross-Session Memory: 1/1
Multi-Hop QA: 2/3
Agent Task Memory: 1/1
Personalization: 0/1
  - PerLTQA: no data
Factuality / Grounding: 0/1
  - RAGAS: no data
Sources:
- Memoro paper (arXiv:2403.02135); evaluated on LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- Memoro paper (arXiv:2403.02135); evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- Memoro paper (arXiv:2403.02135); evaluated on AgentBench Memory Track (Tsinghua KEG, 2308)
- Memoro paper (arXiv:2403.02135); evaluated on HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (Stanford / CMU, 1809)
- Memoro paper (arXiv:2403.02135); evaluated on MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)
- Memoro paper (arXiv:2403.02135); evaluated on MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries (HKUST, 2401)