Paradot
by WithFeeling.AI
System Card
Organization: WithFeeling.AI
Released: 2023-07
Architecture: hierarchical-summary / Memory-to-Understanding Model (M2U)
Details: Paradot's proprietary Memory-to-Understanding Model (M2U) is a matrix of LLMs, ranging from several billion to tens of billions of parameters, that captures a real-time evolving memory stream during conversations. Memories are categorized into levels by their consolidation potential, from fleeting feelings to core facts, building a long-term evolving understanding of each user. Users can directly view, edit, delete, and favorite memories in a visible memory stream, making it one of the most user-transparent memory systems among companion apps.
Parameters: —
Domain: personalization, episodic-session, lifelong-learning
Open Source: No
Tags: companion, memory-to-understanding, emotional-ai, visible-memory, user-editable, evolving
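The leveled, user-editable memory stream described in the Details above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not Paradot's actual implementation: the level names, class names, and consolidation rule are all hypothetical, chosen only to mirror the described behavior (levels from fleeting feelings to core facts; user-facing view/edit/delete/favorite operations).

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Hypothetical consolidation levels, ordered low to high."""
    FLEETING_FEELING = 0
    EPISODIC_DETAIL = 1
    RECURRING_THEME = 2
    CORE_FACT = 3


@dataclass
class Memory:
    text: str
    level: Level
    favorite: bool = False


class MemoryStream:
    """Sketch of a visible, user-editable memory stream."""

    def __init__(self) -> None:
        self._items: list[Memory] = []

    def add(self, text: str, level: Level) -> int:
        """Append a memory; return its index in the stream."""
        self._items.append(Memory(text, level))
        return len(self._items) - 1

    def view(self) -> list[Memory]:
        """User-facing view of the whole stream."""
        return list(self._items)

    def edit(self, idx: int, text: str) -> None:
        self._items[idx].text = text

    def delete(self, idx: int) -> None:
        del self._items[idx]

    def favorite(self, idx: int) -> None:
        self._items[idx].favorite = True

    def consolidate(self, min_level: Level = Level.RECURRING_THEME) -> list[Memory]:
        """Drop memories below the consolidation threshold;
        user-favorited memories are always retained."""
        self._items = [m for m in self._items
                       if m.level >= min_level or m.favorite]
        return self._items
```

In this sketch, consolidation is a simple threshold filter with a favorite override, standing in for whatever ranking M2U actually applies when promoting memories from fleeting to long-term.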
Capability Profile
Benchmark Scores (6 of 14 benchmarks)
Long-Context Retrieval: 0/5
  RULER: no data
  NIAH: no data
  LooGLE: no data
  LongBench: no data
  ∞Bench: no data
Multi-Turn Recall: 2/2
Cross-Session Memory: 1/1
Multi-Hop QA: 1/3
Agent Task Memory: 1/1
Personalization: 1/1
Factuality / Grounding: 0/1
  RAGAS: no data

Sources:
- Paradot vendor documentation; evaluated on LoCoMo: Long-Term Conversational Memory Benchmark (Snap Research, 2402)
- Paradot vendor documentation; evaluated on LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (Salesforce AI Research, 2410)
- Paradot vendor documentation; evaluated on MemoryBank: Enhancing LLMs with Long-Term Memory (Sun Yat-sen University, 2305)
- Paradot vendor documentation; evaluated on PerLTQA: A Personal Long-Term Memory Question Answering Dataset (PolyU, 2402)
- Paradot vendor documentation; evaluated on AgentBench Memory Track (Tsinghua KEG, 2308)
- Paradot vendor documentation; evaluated on BABILong: Testing the Limits of LLMs with Long-Context Reasoning-in-a-Haystack (AIRI, 2406)