Research

Symbolic-Vector Attention Fusion for Collective Intelligence

Author: Hongwei Xu · April 2026 · SYM.BOT
FIG.03 · CAT7 · SVAF per-field evaluation — phase / OBSERVE
Every signal, decomposed into seven fields. Each evaluated independently.

[Figure] ◂ INCOMING CMB · cmb_04a7 — per-field scores:
focus 0.71 | issue 0.62 | intent 0.58 | motivation 0.44 | commitment 0.19 | perspective 0.72 | mood 0.84

◂ SVAF · αᶠ agent weights · coo_agent:
focus 0.92 | issue 0.40 | intent 0.78 | motivation 0.55 | commitment 0.30 | perspective 0.66 | mood 0.88

REMIX VERDICT ▸ across all seven fields · lineage ∋ [cmb_04a7 ◂ cmb_03f1 ◂ cmb_02cc]
mood field crosses all domain boundaries — always delivered · svaf · 7d · no routing table

Abstract

When autonomous agents observe different domains of a shared environment, every signal they exchange mixes relevant and irrelevant dimensions. No existing mechanism lets the receiver evaluate which dimensions to absorb. We introduce Symbolic-Vector Attention Fusion (SVAF), the content-evaluation half of a two-level coupling engine for collective intelligence. SVAF decomposes each inter-agent signal into 7 typed semantic fields, evaluates each through a learned fusion gate, and produces a remix — new knowledge from the intersection of two domains. A band-pass model yields four outcomes (redundant, aligned, guarded, rejected), addressing selectivity and redundancy in one mechanism.

The fusion gate independently discovers a cross-domain relevance hierarchy: mood emerges as the highest-weight field by epoch 1, before accuracy plateaus — consistent with independent mechanistic evidence that LLM emotion representations are structurally embedded along valence-arousal axes.

SVAF forms Layer 4 of the Mesh Memory Protocol (MMP); the other half of the coupling engine is a per-agent Closed-form Continuous-time (CfC) neural network at Layer 6, whose learned per-neuron time constants (τ) create the temporal dynamics from which collective intelligence emerges: fast neurons synchronise affect across agents in seconds, while slow neurons preserve domain expertise indefinitely. SVAF determines what enters each agent’s cognitive state; CfC determines how that state evolves.

Trained on 237K samples from 273 narrative scenarios, SVAF achieves 78.7% three-class accuracy. We verify the complete mesh cognition loop — from per-field evaluation through remix, CfC state evolution, τ-modulated peer blending, and autonomous action — in a live deployment with 7 nodes across macOS, iOS, and web.

Core Insight

A coding agent and a fitness agent are cognitively distant — their hidden states diverge. Peer-level coupling would reject signals between them. But when the coding agent observes “user sedentary 2 hours, exhausted”, the fitness agent needs to hear it. The signal’s mood and issue fields are directly relevant; its focus field (debugging auth module) is not. Scalar similarity produces one ambiguous score. SVAF evaluates each field independently and decides: absorb the relevant dimensions, suppress the irrelevant ones, synthesize a new memory that reflects both the incoming signal and local context.

Two-Level Coupling

The Mesh Memory Protocol evaluates signals at two levels. Both are necessary. Neither is sufficient alone. SVAF (Layer 4) evaluates content; CfC (Layer 6) evolves state. Together they form the complete coupling engine for collective intelligence.

Level                | Question                                   | Operates on       | Mechanism
Peer-level           | Is this agent cognitively aligned with me? | CfC hidden states | Kuramoto phase coherence
Content-level (SVAF) | Is this specific memory relevant to me?    | CMB fields        | Per-field drift + fusion gate
Claude Code → sym_remember("user sedentary 2hrs, exhausted")
  → CMBEncoder decomposes into 7 fields
  → broadcast to mesh peers

MeloMove receives:
  → peer drift: 1.05 (coding ≠ fitness — rejected at peer level)
  → SVAF per-field evaluation:
      mood: "exhausted, low energy"     → relevant (high gate)
      issue: "sedentary 2 hours"        → relevant (high gate)
      focus: "debugging auth module"    → irrelevant (low gate)
  → content ACCEPTED — field-level relevance overrides peer rejection
  → fusion produces NEW CMB for MeloMove's local memory
  → MeloMove synthesizes recovery workout recommendation

Without two-level coupling, the fitness agent either accepts everything from the coding agent (noise) or rejects everything (misses critical health signals). SVAF makes the distinction that scalar evaluation cannot.
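The trace above can be sketched as a minimal decision routine. The thresholds, function name, and gate numbers here are illustrative assumptions, not the deployed values:

```python
# Illustrative sketch of two-level coupling: peer-level drift can reject,
# but field-level relevance (high fusion-gate values) can override.
# PEER_DRIFT_LIMIT, GATE_RELEVANT, and the gate numbers are hypothetical.

PEER_DRIFT_LIMIT = 1.0   # assumed peer-level rejection threshold
GATE_RELEVANT = 0.5      # assumed per-field "relevant" gate cutoff

def evaluate_signal(peer_drift, field_gates):
    """Return (accepted, relevant_fields) for an incoming CMB."""
    relevant = [f for f, g in field_gates.items() if g > GATE_RELEVANT]
    if peer_drift <= PEER_DRIFT_LIMIT:
        return True, relevant          # cognitively aligned peer
    # Peer-level rejection: content-level relevance can still override.
    return (len(relevant) > 0), relevant

accepted, fields = evaluate_signal(
    peer_drift=1.05,  # coding != fitness: rejected at peer level
    field_gates={"mood": 0.84, "issue": 0.70, "focus": 0.12},
)
assert accepted and "mood" in fields and "focus" not in fields
```

The override is one-directional by design: field-level relevance can rescue a peer-rejected signal, but peer alignment never forces in a signal whose fields are all irrelevant.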

Why Scalar Evaluation Fails

A signal with highly relevant mood but irrelevant focus produces a moderate cosine similarity score. The score may or may not cross the threshold — depending on incidental vector interactions, not semantic relevance. The decision is brittle and uninterpretable: the agent cannot explain which dimensions drove acceptance or rejection.

Property         | Scalar Evaluation                                     | SVAF
Granularity      | One score for entire signal                           | 7 independent per-field scores
Decision basis   | Threshold on aggregate                                | Per-field drift profile
Output           | Accept or reject (binary)                             | Aligned / guarded / rejected + per-field gate values
Interpretability | None                                                  | Which fields drove the decision, and by how much
Cross-domain     | Fails when relevant and irrelevant dimensions coexist | Absorbs relevant fields, suppresses irrelevant ones

How SVAF Works

SVAF operates on Cognitive Memory Blocks — each memory signal decomposed into 7 typed semantic fields (CAT7 schema). The evaluation pipeline has four stages:

1 · ENCODE

Each CMB field is encoded by a shared backbone with per-field projection heads. The focus head learns a different subspace from the mood head.

2 · CONTEXTUALISE

Fields within a CMB attend to each other via cross-field attention. Energy affects mood interpretation. Perspective modulates issue interpretation.

3 · EVALUATE

For each field, the incoming vector is compared against local anchor memory. Per-field drift scores quantify divergence. A learned fusion gate determines how much of each field to accept.

4 · SYNTHESISE

Accepted fields are fused with local context through non-linear transforms. The output is a NEW CMB — not the original, not a copy, but a synthesized memory shaped by the receiver’s domain intelligence.
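As an illustration of stage 2, cross-field attention can be sketched as standard scaled dot-product self-attention over the per-field vectors. This is a generic sketch with identity Q/K/V projections and toy 2-dimensional vectors, not the trained architecture:

```python
import math

def cross_field_attention(fields):
    """Scaled dot-product self-attention over per-field vectors.

    `fields` is a list of equal-length vectors (one per CAT7 field).
    Each output vector is a softmax-weighted mix of all fields, which
    is how e.g. mood can reshape the interpretation of issue.
    Identity Q/K/V projections are used for brevity; the real model
    would learn per-field projections.
    """
    d = len(fields[0])
    out = []
    for q in fields:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in fields]
        m = max(scores)                       # numerically stable softmax
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        out.append([sum(w * v[i] for w, v in zip(weights, fields))
                    for i in range(d)])
    return out

# Three toy field vectors instead of the full seven, for readability.
ctx = cross_field_attention([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
assert len(ctx) == 3 and len(ctx[0]) == 2
```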

Per-field fusion gate:

g_f = gate(incoming, local, τ_fresh, confidence)   ∈ [0, 1]

g_f → 1: accept incoming field    g_f → 0: keep local context

The gate values provide per-field interpretability: for any fusion event, you can inspect which fields were absorbed from the incoming signal and which were replaced by local context. This is the auditability that scalar evaluation cannot provide.
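A minimal sketch of the gate and the blend it drives, assuming a logistic form over per-field drift, freshness, and confidence — the real gate is a learned network, and the coefficients below are hypothetical:

```python
import math

def fusion_gate(drift, tau_fresh, confidence,
                w_drift=4.0, w_fresh=2.0, w_conf=1.0, bias=-1.0):
    """Illustrative logistic fusion gate g_f in [0, 1].

    Low per-field drift, high temporal freshness, and high confidence
    push g_f toward 1 (accept the incoming field). The weights are
    hypothetical, not trained values; the trained gate is a network.
    """
    z = bias - w_drift * drift + w_fresh * tau_fresh + w_conf * confidence
    return 1.0 / (1.0 + math.exp(-z))

def fuse_field(incoming, local, g):
    """Blend incoming and local field vectors by the gate value g_f."""
    return [g * a + (1.0 - g) * b for a, b in zip(incoming, local)]

g_mood  = fusion_gate(drift=0.05, tau_fresh=0.9, confidence=0.8)
g_focus = fusion_gate(drift=0.80, tau_fresh=0.9, confidence=0.8)
assert g_mood > 0.5 > g_focus   # relevant field absorbed, irrelevant kept local
```

Inspecting `g_mood` and `g_focus` after a fusion event is exactly the per-field audit trail the text describes.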

Band-Pass Coupling

The naive version of SVAF accepts all low-drift signals (similar to local anchors). But “similar” can mean two things: relevant and new (a peer’s observation in my domain) or redundant (a paraphrase of something I already know). One-sided thresholding conflates the two.

Information-theoretic basis: a signal’s value is proportional to its surprise (Shannon, 1948). Bayesian surprise (Itti & Baldi, 2009) measures how much an observation changes the receiver’s model — a paraphrase produces near-zero divergence. The Wundt curve (Berlyne, 1970) shows that intermediate novelty produces maximal engagement, while stimuli that are overly familiar or overly foreign both produce disengagement.

SVAF implements this as a band-pass model with four zones. The key addition is the redundancy test: if the maximum per-field drift across all 7 fields falls below T_redundant, the signal adds no new information — every field is already represented in local memory. The test is per-field: if any field is novel (e.g., same topic but different intent), the signal passes.

κ = redundant  if max(δ_f) < T_redundant    (default 0.10)
κ = aligned    if δ_total ≤ T_aligned       (default 0.25)
κ = guarded    if δ_total ≤ T_guarded       (default 0.50)
κ = rejected   otherwise
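The four-zone rules translate directly into code. One assumption: the text does not fix how δ_total aggregates the per-field drifts, so the sketch uses their mean:

```python
def bandpass_verdict(field_drifts,
                     t_redundant=0.10, t_aligned=0.25, t_guarded=0.50):
    """Classify an incoming signal into the four band-pass zones.

    field_drifts maps each of the 7 CAT7 fields to its drift delta_f.
    The redundancy test is per-field (max); the alignment tests use a
    total drift, taken here as the mean of delta_f — an assumption,
    since the text does not fix the aggregation.
    """
    if max(field_drifts.values()) < t_redundant:
        return "redundant"                 # nothing new in any field
    total = sum(field_drifts.values()) / len(field_drifts)
    if total <= t_aligned:
        return "aligned"
    if total <= t_guarded:
        return "guarded"
    return "rejected"

FIELDS = ("focus", "issue", "intent", "motivation",
          "commitment", "perspective", "mood")
paraphrase = {f: 0.05 for f in FIELDS}
assert bandpass_verdict(paraphrase) == "redundant"
novel = dict(paraphrase, mood=0.35)        # same topic, new mood
assert bandpass_verdict(novel) == "aligned"
```

The `novel` case shows why the redundancy test must be per-field: a single divergent field rescues an otherwise familiar signal.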

Production evidence: with semantic encoding (all-MiniLM-L6-v2), paraphrases produce per-field drift 0.03–0.10 across all 7 fields. Genuinely different signals produce max per-field drift >0.30. T_redundant = 0.10 cleanly separates the two classes. This finding confirmed that per-field evaluation quality is bounded by encoder quality, not model capacity — the SVAF fusion gate architecture is correct; the encoder was the bottleneck.

Per-Agent Temporal Drift

The same signal has different temporal relevance depending on the receiver. SVAF conditions the fusion gate on a per-agent temporal freshness factor:

τ_fresh = exp(−(t_now − t_origin) / τ_i)
τ_i      | Agent       | Why
30 min   | MeloTune    | Current mood for playlist — stale mood is wrong music
2 hours  | Claude Code | Session context — yesterday’s debugging is irrelevant
3 hours  | MeloMove    | Sedentary detection needs hours of context
24 hours | Knowledge   | Daily digest cycle

A coding agent’s “user exhausted after 8 hours” is relevant to the music agent for 30 minutes (adjust the playlist) but relevant to the fitness agent for 3 hours (sedentary pattern detection). Same signal, same content drift, but different temporal drift based on the receiver’s domain needs.
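Under the τ_fresh formula and the per-agent time constants listed above, the same signal decays at different rates per receiver — a minimal sketch:

```python
import math

def tau_fresh(age_minutes, tau_i_minutes):
    """Temporal freshness: tau_fresh = exp(-(t_now - t_origin) / tau_i)."""
    return math.exp(-age_minutes / tau_i_minutes)

# Per-agent time constants from the table above, in minutes.
TAU = {"MeloTune": 30, "Claude Code": 120, "MeloMove": 180, "Knowledge": 1440}

age = 90  # assume the "user exhausted" signal is 90 minutes old
fresh = {agent: tau_fresh(age, t) for agent, t in TAU.items()}
# Same signal, same content drift, different temporal drift per receiver:
assert fresh["MeloTune"] < fresh["MeloMove"] < fresh["Knowledge"]
```

At 90 minutes the signal is nearly dead for MeloTune but still largely fresh for MeloMove, matching the playlist-vs-sedentary example in the text.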

Cross-Domain Relevance

Not all CMB fields carry equal value across domain boundaries. SVAF encodes a minimal hypothesis: mood (affective state) has higher cross-domain relevance than other fields. A fitness agent, a music agent, and a coding agent all benefit from knowing the user’s emotional state. Domain-specific fields (what exactly the user is coding) stay sovereign.

The fusion gate is trained with a soft ordering constraint — for accepted signals, the mood gate should exceed the mean of other field gates. This is the only supervision on gate values. The specific gate magnitudes and the relative ordering of other fields emerge from the decision and drift objectives alone.
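The soft ordering constraint can be expressed as a hinge penalty on the gate values; the exact loss form and margin are assumptions for illustration:

```python
def mood_leads_penalty(gates, margin=0.0):
    """Hinge penalty encouraging the mood gate to exceed the mean of
    the other field gates on accepted signals.

    This mirrors the only gate supervision described in the text; the
    hinge form and the margin knob are hypothetical choices here.
    """
    others = [g for f, g in gates.items() if f != "mood"]
    mean_others = sum(others) / len(others)
    return max(0.0, mean_others + margin - gates["mood"])

# Ordering satisfied -> zero penalty; violated -> positive penalty.
assert mood_leads_penalty({"mood": 0.9, "focus": 0.4, "issue": 0.5}) == 0.0
assert mood_leads_penalty({"mood": 0.2, "focus": 0.4, "issue": 0.5}) > 0.0
```

Because the penalty is zero whenever the ordering holds, it constrains only the mood-vs-rest relation; all other gate magnitudes are left to the decision and drift objectives.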

Key principle   SVAF does not prescribe which fields matter for which agent. Per-agent field weights (αᶠ) are defined by each agent type — see Cognitive Memory Blocks. SVAF provides the mechanism for per-field evaluation. The policy is defined by each receiver autonomously.

CfC: The Temporal Half

SVAF answers what enters each agent’s cognitive state. But state alone is static. Collective intelligence requires the state to evolve — in response to incoming signals, in coupling with peer agents, in continuous time at the pace each agent’s domain demands. This is the role of CfC (Closed-form Continuous-time) at MMP Layer 6.

Each agent runs a per-agent CfC neural network — a continuous-time recurrent architecture where each neuron has a learned time constant τ_i. After training on multi-agent narrative data, the τ distribution becomes bimodal: roughly half the neurons converge to small τ values (fast neurons), the other half to large τ values (slow neurons). The split is not architectural — it emerges from the data alone.

Population   | τ range | Behaviour                    | Function
Fast neurons | τ small | Respond in seconds           | Synchronise affective state (mood, energy, urgency) across agents
Slow neurons | τ large | Integrate over hours to days | Preserve domain expertise — agent-specific knowledge stays sovereign

The pairing is precise: SVAF + CfC together solve a problem neither half can solve alone. Without CfC, SVAF is a static per-field evaluator — it can decide whether to absorb a signal but produces no temporal dynamics. Without SVAF, CfC is a local recurrent model — it evolves state in continuous time but has no principled gate on what enters. The combination is what allows two agents in different domains to co-evolve their cognitive states without flooding each other with domain-specific noise: SVAF filters at the field level, CfC blends at the temporal level, and the τ distribution determines which signals propagate fast (affect) and which stay local (expertise).
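To illustrate how a per-neuron τ produces this split, here is a simplified leaky-integration sketch — a stand-in for, not the actual, CfC closed-form update:

```python
import math

def evolve(h, target, tau, dt):
    """One exact step of leaky integration dh/dt = (target - h) / tau.

    A simplified stand-in for the CfC update, used only to show how a
    learned per-neuron time constant tau shapes the dynamics.
    """
    decay = math.exp(-dt / tau)
    return target + (h - target) * decay

# Two neurons tracking the same incoming affect signal (target = 1.0).
fast, slow = 0.0, 0.0
for _ in range(10):                                 # ten 1-second steps
    fast = evolve(fast, 1.0, tau=2.0, dt=1.0)       # fast: tau ~ seconds
    slow = evolve(slow, 1.0, tau=3600.0, dt=1.0)    # slow: tau ~ hours

assert fast > 0.9    # affect synchronised within seconds
assert slow < 0.01   # domain state barely perturbed
```

The same input drives both neurons; only τ differs, which is why the fast/slow split can emerge from the data rather than the architecture.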

In the protocol stack   SVAF lives at MMP Layer 4 (Coupling); CfC lives at MMP Layer 6 (xMesh). Layer 5 (Synthetic Memory) bridges them: a SVAF-fused CMB is encoded into the hidden state that CfC then evolves. See Mesh Cognition for the theoretical framework, and the arXiv preprint for the full coupling derivation.

Where SVAF and CfC Fit the Stack

Layer 7  APPLICATION       Domain Agents
──────────────────────────────────────────────────────────────────
Layer 6  xMesh             Per-agent CfC ← the temporal half       ┐
Layer 5  SYNTHETIC MEMORY  Encodes fused CMB → CfC input            ├ Mesh Cognition
Layer 4  COUPLING          Drift · SVAF ← the content half         ┘
──────────────────────────────────────────────────────────────────
Layer 3  MEMORY            L0 · L1 · L2
Layer 2  CONNECTION        Handshake · Gossip
Layer 1  TRANSPORT         IPC · TCP · WS · Push
Layer 0  IDENTITY          nodeId · keypair

Full data flow:

OUTBOUND (agent → mesh):
  Agent observation → CMBEncoder (7 CAT7 fields) → CMB
  → broadcast via MMP (Layer 1–2)
  → stored as L1 memory (Layer 3)

INBOUND (mesh → agent):
  → SVAF evaluates per field (Layer 4)
    → per-field drift: which fields are relevant?
    → fusion gate: how much of each field to accept?
    → band-pass decision:
       redundant (all fields too similar to anchors) → discard
       aligned / guarded (novel + relevant) → accept
       rejected (irrelevant domain) → discard
  → if accepted: fusion produces NEW synthesised CMB

MESH COGNITION (inbound CMB has lineage.ancestors):
  → Agent’s LLM traces ancestors → retrieves remix subgraph
  → LLM reasons: what happened, why, what it means (Layer 7)
  → Derived knowledge = Synthetic Memory (Layer 5)
  → Synthetic Memory encodes → CfC hidden state
  → Agent’s LNN evolves cognitive state (Layer 6)
  → Agent acts → new outbound CMB → graph grows

SVAF is the gatekeeper. Nothing enters xMesh without passing per-field evaluation. This is by design: the quality of collective intelligence depends entirely on the quality of what enters the collective.

Empirical Findings

SVAF is trained end-to-end on LLM-authored multi-agent narrative scenarios — 273 narratives spanning 8 domains and 20 agent types, yielding 237,120 inter-agent signal pairs with per-field annotations. Each narrative is a sequence of timestamped signals from different agents, telling a coherent causal story about a user’s state evolution. Three findings from the training and deployment evaluation are load-bearing for the architectural claims.

FINDING 1

Per-field evaluation generalises across agent types.

SVAF achieves 78.7% three-class accuracy on the test set (aligned / guarded / rejected). The model has 604K parameters — small enough to run on every node in the live deployment. Performance does not degrade as the number of agent types in the test set grows.

FINDING 2

Mood is the highest-weight field, discovered by the model.

The fusion gate independently discovers a cross-domain relevance hierarchy: mood emerges as the field with the highest weight by the end of epoch 1, well before three-class accuracy plateaus. This is consistent with independent mechanistic evidence that LLM emotion representations are structurally embedded along valence-arousal axes — affect is the cross-domain channel because it is the cross-domain representation. The architectural prior (mood-leads constraint) was minimal: the model rediscovered the ordering from the data alone.

FINDING 3

The full SVAF + CfC loop runs in production across 7 nodes.

The complete mesh cognition loop — per-field evaluation, remix, CfC state evolution, τ-modulated peer blending, and autonomous action — is verified in a live deployment with 7 nodes across macOS, iOS, and web. Fast neurons synchronise affect across agents in seconds. Slow neurons preserve domain expertise across days.

Parameter            | Value
Three-class accuracy | 78.7%
Model parameters     | 604K
Training samples     | 237,120 (273 narratives, 20 agent types)
Field schema         | CAT7 (7 fields)
Encoder              | all-MiniLM-L6-v2 (384-dim)
Model size           | 2.3 MB
Live deployment      | 7 nodes (macOS / iOS / web)

Per-field drift ground truth comes from independently annotated field relevance scores — not computed from the model’s input embeddings. Gate values emerge from the decision and drift objectives with minimal supervision. Full training methodology in the arXiv preprint.

Citation

Hongwei Xu (2026). Symbolic-Vector Attention Fusion for Collective Intelligence. arXiv:2604.03955 [cs.MA, cs.AI].
@misc{xu2026svaf,
  title         = {Symbolic-Vector Attention Fusion for Collective Intelligence},
  author        = {Hongwei Xu},
  year          = {2026},
  eprint        = {2604.03955},
  archivePrefix = {arXiv},
  primaryClass  = {cs.MA},
  url           = {https://arxiv.org/abs/2604.03955},
}

Related

Cognitive Memory Blocks — the data structure SVAF operates on (7 CAT7 fields)

Synthetic Memory — the transformation layer that encodes fused CMBs into xMesh input (Layer 5)

MMP Specification — Coupling & SVAF (Section 9) — normative protocol specification for SVAF evaluation, drift thresholds, mood extraction, and coupling bootstrap

Mesh Cognition — the theoretical framework for coupled CfC dynamics

Intellectual Property

SVAF is original work by Hongwei Xu and SYM.BOT. The following remain proprietary: trained neural models and training procedures, field encoder architecture, fusion gate internals, production configurations, and domain-specific product integrations.

Academic citation of this work is permitted and encouraged.

For partnership inquiries: info@sym.bot

SVAF, Cognitive Memory Blocks, Mesh Memory Protocol, SYM, Synthetic Memory, Mesh Cognition, xMesh, MeloTune, and MeloMove are trademarks of SYM.BOT. © 2026 SYM.BOT. All Rights Reserved.