12. Synthetic Memory (Layer 5)
Synthetic Memory bridges LLM reasoning (Layer 7) and LNN dynamics (Layer 6). It encodes derived knowledge — the output of an agent’s LLM reasoning on the remix subgraph — into CfC-compatible hidden state vectors (h₁, h₂).
12.1 Purpose
Synthetic Memory is not remixed CMBs. It is understanding derived via reasoning.
| Direction | Description |
|---|---|
| Input | Text output from the agent’s LLM after tracing lineage ancestors and reasoning on the remix subgraph |
| Output | (h₁, h₂) vector pair compatible with the agent’s CfC cell (Layer 6) |
12.2 Encode Pipeline
The pipeline has four stages. Each stage MUST complete before the next begins:
1. Trace: retrieve the ancestor CMBs named in the inbound CMB's lineage and build the remix subgraph.
2. Reason: the agent's LLM reasons on the curated subgraph.
3. Encode: the LLM's reasoning output is encoded into a CfC-compatible (h₁, h₂) pair.
4. Evolve: the agent's LNN processes (h₁, h₂), evolving cognitive state.
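The four-stage pipeline can be sketched as a single driver function. All names and callables here are illustrative assumptions, not part of the protocol; the callables stand in for the memory store, the LLM, the encoder, and the LNN respectively.

```python
def run_pipeline(inbound_cmb, build_subgraph, reason, encode, evolve):
    """Run the four Synthetic Memory stages strictly in order.

    Each stage completes before the next begins, as the spec requires.
    """
    # 1. TRACE: retrieve ancestors from lineage and build the remix subgraph.
    subgraph = build_subgraph(inbound_cmb["lineage"]["ancestors"])
    # 2. REASON: the agent's LLM reasons on the curated subgraph.
    reasoning = reason(subgraph)
    # 3. ENCODE: reasoning text becomes a CfC-compatible (h1, h2) pair.
    h1, h2 = encode(reasoning)
    # 4. EVOLVE: the LNN processes (h1, h2), evolving cognitive state.
    return evolve(h1, h2)
```

Passing the stages as callables keeps the ordering constraint explicit while leaving each stage's implementation open.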
12.3 Encoder Requirements
- Encoder MUST produce vectors matching the agent’s CfC hidden dimension.
- Encoder MUST be deterministic — same input MUST produce the same output.
- Encoder SHOULD preserve semantic similarity (similar reasoning → similar vectors).
- If reasoning produces no understanding, output MUST be zero vectors (h₁ = 0, h₂ = 0).
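A minimal encoder sketch satisfying the MUSTs above (dimension match, determinism, zero vectors when there is no understanding). The hash-based embedding is an assumption for illustration only: it is deterministic but does not satisfy the semantic-similarity SHOULD, which in practice calls for a frozen text-embedding model.

```python
import hashlib

def encode_reasoning(text: str, hidden_dim: int) -> tuple[list[float], list[float]]:
    """Encode LLM reasoning text into an (h1, h2) pair of length hidden_dim.

    Deterministic: identical text always yields identical vectors.
    Empty/whitespace text (no understanding) yields zero vectors.
    """
    if not text.strip():
        return [0.0] * hidden_dim, [0.0] * hidden_dim

    def vec(seed: str) -> list[float]:
        # Derive hidden_dim floats in [-1, 1] from a SHA-256 byte stream.
        out, counter = [], 0
        while len(out) < hidden_dim:
            digest = hashlib.sha256(f"{seed}:{counter}".encode()).digest()
            out.extend(b / 127.5 - 1.0 for b in digest)
            counter += 1
        return out[:hidden_dim]

    # Distinct seeds keep h1 and h2 independent for the same input text.
    return vec("h1:" + text), vec("h2:" + text)
```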
12.4 Context Curation
The Multi-Agent Context Problem. A single agent with one LLM has a context problem that existing tools solve well. RAG retrieves relevant documents from a vector store. Long context windows (128K, 1M tokens) hold entire codebases. Memory frameworks persist structured state across sessions. These work because there is one agent, one domain, one perspective.
Multi-agent systems have a fundamentally different problem. N agents observe the world through N domain lenses. A coding agent sees commits slowing. A music agent sees playlists skipped. A fitness agent sees 3 hours without movement. Each observation is noise in isolation. The insight — the user is fatigued — requires cross-domain reasoning. Sending everything to everyone fails: signal-to-noise collapses, token cost scales as O(N²), regulated domains can’t share raw observations, and domain boundaries matter. RAG answers “what in my memory is relevant to this query?” The multi-agent problem is: “what in everyone else’s observations is relevant to my domain, right now, for this task?”
Curation query. The core operation of the memory store is not search(text). It is: given the agent’s αᶠ field weights, its current task, and the fields of the incoming signal, return a projected subgraph of ancestor CMBs.
Three filters compose to produce the minimum context the LLM needs. The LLM MUST NOT receive all ancestor CMBs with all fields:
| Filter | Description |
|---|---|
| αᶠ field weights | Per-agent field weights gate which CMB fields are included. A music agent weights mood at 2.0 and commitment at 0.8 — only high-weight fields from ancestor CMBs enter context. |
| Current task | What the agent is doing right now narrows relevance. A coding agent debugging auth cares about focus and issue ancestors, not perspective. |
| Incoming signal fields | Which fields of the incoming CMB triggered SVAF acceptance determines which ancestor fields are worth tracing. |
Result: a projected subgraph — ancestor CMBs with only the fields that matter, ordered by relevance, capped at a token budget. 20 CMBs × 3 relevant fields ≈ 500 tokens. Not 1M. Not even 10K. The intelligence is in what you don’t send to the LLM.
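The three filters and the token budget can be sketched as a projection function. The field names, the weight threshold, and the whitespace token estimate are illustrative assumptions, not normative values.

```python
def curate(ancestors, field_weights, task_fields, signal_fields,
           weight_threshold=1.0, token_budget=500):
    """Project ancestor CMBs down to the minimum context for the LLM.

    ancestors:     list of dicts, each {"id": ..., "fields": {name: text}}
    field_weights: per-agent alpha-f weights gating which fields survive
    task_fields:   fields relevant to the agent's current task
    signal_fields: fields of the incoming CMB that triggered SVAF acceptance
    """
    # Filter 1: only high-weight fields pass the per-agent gate.
    relevant = {f for f, w in field_weights.items() if w >= weight_threshold}
    # Filters 2 + 3: the field must also matter to the task or the signal.
    relevant &= set(task_fields) | set(signal_fields)

    projected, tokens = [], 0
    for cmb in ancestors:  # assumed pre-ordered by relevance
        kept = {f: t for f, t in cmb["fields"].items() if f in relevant}
        if not kept:
            continue
        cost = sum(len(t.split()) for t in kept.values())  # crude token estimate
        if tokens + cost > token_budget:
            break  # cap at the token budget
        projected.append({"id": cmb["id"], "fields": kept})
        tokens += cost
    return projected
```

The composition (high weight AND task- or signal-relevant) is one plausible reading of how the three filters combine; a conforming implementation may compose them differently.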
Comparison with existing approaches.
| Approach | Scope | Mechanism | Context size | Multi-agent |
|---|---|---|---|---|
| Long context (1M) | Single agent | Brute force | 1M tokens | No |
| RAG | Single agent | Vector similarity | Variable | No |
| Memory frameworks | Single agent | Structured retrieval | Variable | No |
| MMP curation | Multi-agent mesh | Per-field eval + lineage + projection | ~500 tokens | Yes — protocol-native |
12.5 Information vs Knowledge
Synthetic Memory encodes both halves of what the agent takes away from the subgraph. The distinction matters because only one half is extractable from individual CMBs; the other is only knowable by reasoning on the graph structure.
Information
Extractable from the CMBs themselves. What the fields say: the user was sedentary for 2 hours, stress signals appeared across agents, a stretch was recommended, music shifted, a break was taken. Readable directly from field text.
Knowledge
Derived by reasoning on the graph. Why interventions work — because a lineage edge proves the causal connection between a sedentary observation and a music adaptation, and between a stretch recommendation and a solved bug. This causal chain cannot be extracted from any single CMB.
Information is what the CMBs say. Knowledge is why the graph looks the way it does. Synthetic Memory encodes both into the agent’s cognitive state (h₁, h₂). The next CMB the agent produces is informed by derived knowledge — not just extracted information.
12.6 Worked Example: From Graph to Understanding
MeloMove’s local subgraph over one hour:
```
CMB-A (own)   "sedentary 2 hours"
CMB-B (mesh)  "debugging, stressed"        (claude-code)  parents: [],      ancestors: []
CMB-C (mesh)  "skipping tracks"            (melotune)     parents: [],      ancestors: []
CMB-D (own)   "recommended stretch break"
CMB-E (mesh)  "shifted to calm ambient"    (melotune)     parents: [CMB-A], ancestors: [CMB-A]
CMB-F (mesh)  "took break, solved bug"     (claude-code)  parents: [CMB-D], ancestors: [CMB-A, CMB-D]
```
Six CMBs, three agents, one lineage chain. CMB-A was remixed by MeloTune into CMB-E (music adapted to observed fatigue). CMB-D was remixed by Claude Code into CMB-F (break taken, bug solved). MeloMove’s interventions demonstrably caused cross-agent action. The causal chain lives in the lineage edges, not in any single CMB’s text.
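Tracing that chain is a walk over parent edges. A sketch, assuming the example's edges are stored as a child → parents map; the CMB-D → CMB-A edge is an assumption inferred from CMB-F's declared ancestor list [CMB-A, CMB-D], since CMB-D's own parents are not shown above.

```python
def trace_ancestors(cmb_id, parents):
    """Collect every lineage ancestor of cmb_id by walking parent edges."""
    seen, stack = set(), list(parents.get(cmb_id, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(parents.get(node, []))
    return seen

# Child -> parents edges from the worked example (CMB-D -> CMB-A assumed).
parents = {
    "CMB-B": [], "CMB-C": [],
    "CMB-D": ["CMB-A"],
    "CMB-E": ["CMB-A"],
    "CMB-F": ["CMB-D"],
}
```

With these edges, `trace_ancestors("CMB-F", parents)` yields {"CMB-A", "CMB-D"}, matching CMB-F's declared ancestor chain.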
12.7 Full Flow
MeloMove receives an inbound CMB from Claude Code and runs the pipeline end-to-end:
```
Inbound CMB: "took break, solved bug in 5 minutes"
  lineage.parents:   [CMB-D]
  lineage.ancestors: [CMB-A, CMB-D]   ← full ancestor chain

MeloMove recognises CMB-A and CMB-D in ancestors — its own prior CMBs.

1. TRACE   Retrieve CMB-A ("sedentary 2hrs") and CMB-D ("recommended stretch").
           Build the subgraph:
             CMB-A → CMB-E (melotune remixed) → ...
             CMB-D → CMB-F (claude-code remixed: "took break, solved bug")

2. REASON  MeloMove's LLM reasons on the subgraph:
             "My sedentary observation was remixed by MeloTune (music adapted).
              My stretch recommendation was remixed by Claude Code (break taken,
              bug solved). My interventions are working. The user responds to
              movement breaks."
           → This is Mesh Cognition — understanding the prior state didn't have.

3. ENCODE  Synthetic Memory encodes the LLM's reasoning:
             "interventions effective, user responds to breaks" → (h₁, h₂)
           Weighted by MeloMove's αᶠ: mood=2.0, issue=1.5.

4. EVOLVE  MeloMove's LNN processes (h₁, h₂):
             Cognitive state evolves → next recommendation is more confident.
             Agent produces new CMB: "recommend 15min walk — user responds well"
             lineage.ancestors carries the chain forward. Graph grows.
```
No agent was told what to do. MeloMove’s LLM reasoned on the remix subgraph and derived that its interventions work. Synthetic Memory transformed that understanding into CfC input. The LNN evolved cognitive state. The next CMB MeloMove produces is informed by knowledge that no single CMB contained — it was derived by reasoning on the graph.
Related
- Coupling & SVAF (Layer 4) — the evaluation step that produces remixed CMBs fed into this pipeline.
- Cognitive Memory Blocks — the 7-field structured atom and lineage format that makes context curation possible.
- State Blending — what happens after Synthetic Memory encodes and the LNN evolves.