TL;DR
Every AI conversation currently disappears when the session ends — roughly 19.5M tokens of thought vaporized over six months of daily use. The naive fix (summarize everything) loses the why behind decisions. Claude Code's answer is a three-layer architecture — CLAUDE.md for always-on context, MEMORY.md as an index, topic files loaded on demand — plus an offline "dream" cycle that compacts conversations into typed memories without destroying the original signal. This post compares that approach to Memo, Mastra, and MemPalace across the lossy-vs-verbatim spectrum.
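The three-layer load order described above can be sketched in a few lines. This is a hypothetical illustration, not Claude Code's actual loading code: the `memories/<topic>.md` path convention and the plain-concatenation strategy are assumptions for the sake of the example.

```python
from pathlib import Path

def load_context(workspace: Path, topics_needed: list[str]) -> str:
    """Sketch of the three-layer read path:
    layer 1 (CLAUDE.md) is always loaded, layer 2 (MEMORY.md) is an
    index that is cheap to keep resident, layer 3 (topic files) is
    pulled in only when a conversation actually needs that topic."""
    parts = []
    claude_md = workspace / "CLAUDE.md"      # layer 1: always-on context
    if claude_md.exists():
        parts.append(claude_md.read_text())
    memory_md = workspace / "MEMORY.md"      # layer 2: the index
    if memory_md.exists():
        parts.append(memory_md.read_text())
    for topic in topics_needed:              # layer 3: on demand
        topic_file = workspace / "memories" / f"{topic}.md"  # assumed layout
        if topic_file.exists():
            parts.append(topic_file.read_text())
    return "\n\n".join(parts)
```

The design point the sketch makes concrete: only the first two layers occupy the context window by default, so the token cost of memory stays roughly constant no matter how many topic files accumulate.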
The artifacts
- Deck — 10 slides, dark theme, a side-by-side comparison of the memory systems. For a talk or an executive skim.
- Deep-dive article — 11 sections, TOC, prose. Claude Code-specific internals: how the three layers load, what goes in a topic file, how the dream consolidation runs.
- Source — a Node script that builds the deck (pptxgenjs → PPTX → PDF via sharp) and a Python script that builds the deep-dive (reportlab). Attached so the chain is reproducible.
Deck · The Architecture of AI Memory
Slide outline: title → the problem (19.5M tokens lost) → the memory spectrum (Memo 30-45% · Mastra 94.9% · MemPalace 96.6% R@5) → Claude Code's three layers → the dream cycle → tradeoffs → design principles. Runs ~10 minutes as a talk.
Deep-dive · How Claude Code's Memory Works
Table of contents: (1) The Problem: Ephemeral Intelligence · (2) Architecture Overview · (3) Layer 1: CLAUDE.md · (4) Layer 2: MEMORY.md · (5) Layer 3: Topic Files · (6) The Memory Type System · (7) Writing Memory · (8) The Dream Cycle · (9) Lifecycle: Signal to Knowledge · (10) Comparison with Other Approaches · (11) Design Principles and Tradeoffs.
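Sections 6 and 8 (the memory type system and the dream cycle) compact conversations into typed memories while keeping a pointer back to the source. A minimal sketch of that shape, with the caveat that the `kind` values, field names, and grouping strategy here are illustrative assumptions, not Claude Code's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    kind: str    # assumed types for illustration: "decision", "preference", "fact"
    topic: str   # which topic file this memory belongs to
    text: str    # the compacted takeaway
    source: str  # pointer back to the original conversation (signal preserved)

def consolidate(raw_notes: list[dict]) -> dict[str, list[Memory]]:
    """Group typed memories by topic file. The original conversation is
    referenced, not deleted, so compaction stays non-destructive."""
    by_topic: dict[str, list[Memory]] = {}
    for note in raw_notes:
        mem = Memory(note["kind"], note["topic"], note["text"], note["session"])
        by_topic.setdefault(mem.topic, []).append(mem)
    return by_topic
```

Keeping the `source` pointer is what separates this from naive summarization: a later pass can always re-read the original transcript when a compacted memory turns out to have dropped the why.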
Source · the artifact chain
Both artifacts are generated from code — not hand-authored in PowerPoint/Word. Regenerate by running:
# deck
pnpm install
node build-memory-deck.js
# deep-dive article
uv run build-memory-deepdive.py
Provenance
- Research + generation: Claude Code session 2026-04-08 (see memory: user_memory_research.md)
- Benchmarks referenced: LongMemEval R@5 — Memo (30-45%), Mastra (94.9%), MemPalace raw/hybrid (96.6%), Claude Code untested at time of writing
- Bundled into Notion: 2026-04-16
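For readers unfamiliar with the R@5 numbers quoted above: recall-at-5 measures, per question, whether the relevant item appears in the top five retrieved results. A minimal sketch, assuming the simplified case of one gold item per question (LongMemEval's actual scoring protocol may differ):

```python
def recall_at_5(results: list[tuple[list[str], str]]) -> float:
    """results: one (top_retrieved_ids, gold_id) pair per question.
    Returns the fraction of questions whose gold item landed in the top 5."""
    hits = sum(1 for retrieved, gold in results if gold in retrieved[:5])
    return hits / len(results)
```

Under this reading, Memo's 30-45% means the relevant memory surfaced in the top five for under half the questions, while MemPalace's 96.6% means it almost always did.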