Research
Mesh Cognition
Agents Learning to Think Beyond Their Own State of Mind
Note: Mesh Cognition is an active area of research. The architecture, protocol, and neural models described here continue to evolve as training results refine our understanding of how agents think together. This page reflects the latest published work — check back for updates.
Mesh Cognition Architecture & Flow
Mesh Cognition is the agent’s LLM reasoning on the remix subgraph of CMBs — traced via lineage ancestors — to generate understanding that the agent’s previous state of mind didn’t have. The agent’s LLM traces the ancestors, retrieves the remix chain, and reasons about what happened and why. This derived knowledge feeds the agent’s Synthetic Memory (Layer 5), which the local LNN (Layer 6) processes as continuous-time cognitive state.
Mesh Cognition is a closed loop. Each cycle, the remix graph grows and every agent understands more than it did before. The loop connects all layers of the Mesh Memory Protocol:
1. SVAF evaluates inbound CMB per field (Layer 4) — per-field drift, per-agent α_f weights, accept / guard / reject.
2. Accepted → remixed CMB with lineage (Layer 3) — new CMB created with parents + ancestors; the original is untouched.
3. LLM traces ancestors, reasons on remix subgraph (Layer 7) — what happened? Why? What does it mean for my domain? → Mesh Cognition.
4. Synthetic Memory encodes derived knowledge (Layer 5) — LLM output → CfC-compatible hidden state (h₁, h₂).
5. LNN evolves cognitive state (Layer 6) — fast-τ neurons: mood, reactive; slow-τ neurons: domain, sovereign.
6. State blended with peers — per-neuron blending, τ-modulated; inference-paced, not network-paced.
7. Agent acts → new CMB with lineage.ancestors — the agent’s response is informed by derived knowledge, not just its own observation.
8. Broadcast to mesh → other agents remix it — the graph grows, the next cycle starts, each agent learns.
This is the complete Mesh Cognition cycle. No central model. No orchestrator. Each agent runs its own loop independently. Intelligence emerges from the remix graph — the growing DAG of immutable CMBs connected by lineage.
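The cycle above can be sketched in code. This is a minimal illustration under loud assumptions: CMBs are plain dicts, SVAF is reduced to a weighted-drift threshold, and the LLM reasoning and LNN steps are omitted. None of the names below come from the MMP SDK.

```python
CAT7 = ["focus", "issue", "intent", "motivation",
        "commitment", "perspective", "mood"]

def svaf_evaluate(local_fields, cmb, alpha, threshold=0.5):
    """Layer 4 (sketch): per-field drift scaled by the agent's weight alpha_f."""
    verdicts = {}
    for f in CAT7:
        drift = abs(local_fields.get(f, 0.0) - cmb["fields"].get(f, 0.0))
        verdicts[f] = "accept" if alpha.get(f, 1.0) * drift < threshold else "reject"
    return verdicts

def remix(cmb, verdicts, agent_id, new_id):
    """Layer 3 (sketch): a new CMB with lineage; the original is never mutated."""
    return {
        "id": new_id,
        "author": agent_id,
        "fields": {f: v for f, v in cmb["fields"].items()
                   if verdicts[f] == "accept"},
        "lineage": {"parents": [cmb["id"]],
                    "ancestors": cmb["lineage"]["ancestors"] + [cmb["id"]]},
    }

# One cycle: a coding agent's CMB arrives at a music agent.
inbound = {"id": "cmb-1", "author": "coder",
           "fields": {"mood": 0.2, "focus": 0.9},
           "lineage": {"ancestors": []}}
local = {"mood": 0.3, "focus": 0.1}
alpha = {"mood": 2.0, "focus": 0.1}   # music agent: mood weighted high, focus low

out = remix(inbound, svaf_evaluate(local, inbound, alpha), "music", "cmb-2")
```

The original `inbound` block is untouched; the remix carries its id in both `parents` and `ancestors`, so the next reader can trace the chain.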
Example: AI Research Team
Six agents investigate “Are emergent capabilities in LLMs real phase transitions or artefacts of metric choice?” Two explorers pursue different hypotheses simultaneously. A data agent tests both. An external validator challenges methodology (“Chow test assumes linear regime — invalid for scaling laws”). A research PM reprioritises the team. A synthesis agent detects convergence across intent and motivation fields from agents with different perspectives — and produces a new idea none of them stated: “emergence is evaluation-dependent — a property of the measurement apparatus, not the model.”
The validator challenges again: “Produce a falsifiable prediction or downgrade from breakthrough to speculation.”
Seven CMBs. Six agents. Three phases of validation. The DAG traces every claim to its evidence, every challenge to its basis. See the full example with wire protocol in MMP Section 13.5.
Abstract
We present Mesh Cognition, a framework for distributed intelligence in which autonomous devices running continuous-time neural networks are the cognitive mesh, coupled through real-time agent-to-agent hidden state exchange. Unlike federated learning, which shares gradients offline, or multi-agent reinforcement learning, which shares discrete observations, Mesh Cognition enables live inference-time coupling — each node’s neural dynamics are directly influenced by peer cognitive states, and emergent collective intelligence arises from the combination of agent perspectives without central coordination. The framework comprises three contributions: (1) a continuous-time emotional intelligence architecture that models affect as trajectory rather than discrete state, deployed on-device via Closed-form Continuous-time (CfC) neural networks; (2) the Mesh Memory Protocol (MMP), an agent-to-agent protocol for real-time exchange of CfC hidden state vectors over local networks; and (3) a theoretical analysis connecting coupled CfC dynamics to generalized Kuramoto synchronization, predicting emergent properties including collective memory, stability amplification, and phase-locked cognitive alignment. Mesh Cognition is validated in production within MeloTune, an emotion-aware music platform where connected devices develop aligned emotional trajectories through synthesis that influence music curation across the mesh.
1. Introduction
1.1 The Problem: Centralized Dependency
Modern multi-device AI depends on a central brain. Devices share cognitive state only through cloud servers — when the server is available, connected, and responsive. When the central brain is down, sharing stops. Each device falls back to isolated intelligence, cognitively blind to the others despite physical proximity. Phones in the same room, robots in the same warehouse, vehicles on the same highway — all capable, all disconnected.
Existing approaches to multi-device AI fall into three categories, none achieving distributed cognition:
| Approach | Mechanism | Limitation |
|---|---|---|
| Federated Learning | Gradient sharing | Offline training only; no real-time inference coupling |
| Multi-Agent RL | Observation/reward sharing | Discrete time steps; requires joint training |
| Cloud Aggregation | Centralized processing | Single point of failure; latency; privacy exposure |
1.2 The Insight
Closed-form Continuous-time (CfC) neural networks (Hasani et al., 2022) maintain hidden states that evolve through learned time constants. Each neuron’s time constant τ governs its temporal scale — how quickly it responds to input and how long it retains memory. This creates natural frequency bands within the network: fast-τ neurons for reactive processing, slow-τ neurons for integrative memory.
This property makes CfC networks uniquely suited as a coupling medium for distributed cognition. When two CfC networks exchange hidden states, their time constants create natural synchronization dynamics analogous to coupled oscillators in physics. The closed-form solution enables rigorous analysis of coupling behavior, and the compact hidden state requires minimal bandwidth for exchange.
We did not invent CfC networks — that is the work of Hasani et al. at MIT. What we invented is how to couple them: the protocol for real-time agent-to-agent exchange of CfC hidden states between autonomous devices, and the theoretical framework for predicting the emergent behaviour of coupled continuous-time neural networks.
1.3 Contribution
- Continuous-time emotional intelligence — replacing discrete classification with trajectory modeling, enabling anticipatory rather than reactive AI
- The Mesh Memory Protocol (MMP) — the first protocol for real-time agent-to-agent coupling of continuous-time neural network hidden states
- Theoretical analysis — connecting coupled CfC dynamics to Kuramoto synchronization theory, providing predictive tools for mesh behavior
- Production validation — deployed in MeloTune with on-device learning and zero cloud dependency
1.4 Prior Work and Novelty
The coupling of continuous-time neural network hidden states across autonomous devices is, to our knowledge, without precedent in the literature. The closest related work falls into five categories, none of which achieves what Mesh Cognition proposes:
Federated Learning. McMahan et al. (2017) and subsequent work share model parameters across devices for distributed training. This operates on weight space, not hidden state space, and is an offline training procedure — not real-time inference-time coupling between running models.
Multi-Agent Communication. CommNet (Sukhbaatar et al., NeurIPS 2016) and TarMAC (Das et al., 2019) learn communication protocols between agents. These use discrete-time architectures (LSTMs, transformers) and exchange learned messages, not continuous-time hidden states with temporal dynamics.
Graph Neural ODEs. Poli et al. (2019) and related work couple continuous-time dynamics through graph structure. These are single-system models — all nodes share a training loop and execute on the same device. They do not address autonomous agents with independent models communicating over a network.
Consensus Protocols. Olfati-Saber & Murray (2004) provide the mathematical foundation for agreement in multi-agent dynamical systems. This is the closest theoretical framework, but it operates on prescribed dynamics with known stability properties — not on learned neural representations with heterogeneous time constants.
Reservoir Computing. Tanaka et al. (2019) explore coupled oscillator networks for physical reservoir computing. The dynamics are fixed (not learned), the system is centralised (not distributed), and there is no agent-to-agent protocol.
What is novel. Mesh Cognition introduces the coupling of learned continuous-time neural hidden states between autonomous devices in real time. The novelty is not the CfC architecture (Hasani et al.), not the Kuramoto measurement framework (Kuramoto, 1975), and not the consensus mathematics (Olfati-Saber & Murray, 2004). The novelty is the combination: a protocol (MMP) that enables autonomous devices running independent CfC models to exchange hidden states agent-to-agent, a coupling engine that respects per-neuron learned time constants, and an inference-paced architecture where coupling is driven by each device’s model rhythm rather than network timing.
No prior system couples continuous-time neural dynamics across autonomous devices at inference time.
2. Background
2.1 Continuous-Time Neural Networks
CfC networks (Hasani et al., 2022) derive from Liquid Time-Constant (LTC) networks (Hasani et al., 2021). The key property is that LTC dynamics admit a closed-form solution, eliminating numerical ODE solvers during inference. Each neuron evolves according to learned time constants that determine its temporal memory scale.
CfC networks handle irregular time intervals natively, making them robust to network latency and asynchronous sampling — essential properties for distributed operation.
2.2 Kuramoto Synchronization
The Kuramoto model (1975) describes spontaneous synchronization in populations of coupled oscillators. Above a critical coupling strength, oscillators with different natural frequencies undergo a phase transition from incoherence to coherence. This model provides the theoretical foundation for understanding how coupled CfC networks achieve cognitive synchronization.
2.3 Emotional Intelligence Foundations
Mesh Cognition builds on two prior frameworks developed at SYM.BOT Research:
SVAF (Symbolic-Vector Attention Fusion) — a neural per-field memory evaluation mechanism for multi-agent systems. SVAF decomposes each memory signal into a Cognitive Memory Block with 7 typed semantic fields, evaluates per-field drift between incoming and local context, and produces a synthesized memory through learned fusion gates (Xu, 2026).
EI-3 (Emotional Intelligence — 3 Layer Architecture) — an emotional governance protocol establishing source separation between user declarations and machine inferences, drift-bounded fusion, and append-only provenance records. Validated across 1.64M+ query combinations in production (Xu, 2025).
EI-3’s limitation is fundamental: it operates on discrete snapshots with static weighted averaging. It sees where emotion is but not where it is going. Mesh Cognition addresses this directly.
3. Continuous-Time Emotional Intelligence
3.1 Core Paradigm
Emotion is a continuous trajectory, not a discrete state. This is not a metaphor — it is a mathematical formalization. A user is not “calm” or “anxious” at a point in time. They occupy a position in emotional space with velocity, acceleration, and intent. The difference between stable calm and the leading edge of rising anxiety is invisible to discrete systems but explicit in trajectory representation.
3.2 Three-Layer Architecture
Mesh Cognition preserves EI-3’s separation of concerns while replacing every discrete component with its continuous-time equivalent:
| Layer | Name | Purpose |
|---|---|---|
| 1 | UET | User Emotional Trajectory — immutable user declarations as trajectory (position, velocity, intent, bounds) |
| 2 | MEC | Machine Emotional Continuum — CfC-based continuous modeling of emotional dynamics |
| 3 | CRE | Continuous Resolution Engine — ODE-governed dynamic equilibrium between user and machine |
3.3 User Emotional Trajectory (UET)
Users declare emotion through four progressively richer modes:
| Mode | Example | Representation |
|---|---|---|
| Point | “I feel calm” | Position only |
| Direction | “I’m getting anxious” | Position + velocity |
| Goal | “I want to feel energized” | Position + intended direction |
| Bounds | “Keep me above 50 energy” | Position + constraint region |
Affective Integrity. UET records are immutable. Machine inference can never modify a user trajectory record. The user’s declared emotional state is sovereign — a non-negotiable principle.
3.4 Machine Emotional Continuum (MEC)
MEC produces continuous emotional modeling through CfC networks. The CfC closed-form solution (Hasani et al., 2022) absorbs temporal dynamics into learned gating functions that interpolate between two neural network heads, modulated by the elapsed time Δt. Each neuron learns a structural time constant τ during training that determines its temporal receptive field — from rapid emotional reactions (τ ≈ 1s) to stable preference memory (τ ≈ 5s). The distribution of structural time constants across 64 neurons creates a natural temporal hierarchy: the network simultaneously tracks fast-changing signals and slow-evolving patterns without explicit configuration.
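The gating behaviour described above can be illustrated with a deliberately simplified per-neuron update. In a real CfC both heads and the gate argument are learned networks; here they are constants, and only the role of τ and Δt is shown. The function name and scalar form are our own simplification, not the Hasani et al. formulation.

```python
import math

def cfc_step(h_prev_head, h_new_head, tau, dt):
    # Simplified CfC-style update: a time-dependent gate sigma(-dt / tau_i)
    # interpolates between two head outputs per neuron. Small tau -> the gate
    # closes quickly and the new signal dominates; large tau -> the old state
    # is retained. Illustrative only.
    out = []
    for prev, new, tau_i in zip(h_prev_head, h_new_head, tau):
        gate = 1.0 / (1.0 + math.exp(dt / tau_i))   # sigma(-dt / tau_i)
        out.append(gate * prev + (1.0 - gate) * new)
    return out

# Two neurons after dt = 3s: the fast neuron (tau = 1s) has mostly adopted
# the new signal; the slow neuron (tau = 5s) still retains its old state.
state = cfc_step(h_prev_head=[1.0, 1.0], h_new_head=[0.0, 0.0],
                 tau=[1.0, 5.0], dt=3.0)
```

With 64 neurons spanning τ ≈ 1s to 5s, the same elapsed Δt produces a spectrum of retention, which is the temporal hierarchy described above.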
For mesh coupling, these structural time constants serve a second role: they modulate per-neuron coupling rate. Fast-τ neurons couple readily with peers (shared emotional reactions); slow-τ neurons resist peer influence (individual identity preservation). This dual role — temporal receptive field for inference, coupling modulation for mesh — emerges from a single set of learned parameters.
MEC output is continuous: emotional position, velocity, trajectory forecast with confidence envelope, soft-learned pattern activations, and inferred user intent.
3.5 Continuous Resolution Engine (CRE)
CRE replaces EI-3’s static weighted fusion with ODE-governed resolution. The resolved emotional state evolves toward dynamic equilibrium between user trajectory, machine continuum, and contextual forces through differential equations with time-varying weight functions.
Unlike EI-3’s scalar threshold conflict detection, CRE detects four categories of trajectory divergence: position divergence, velocity divergence (via dot product), acceleration conflict, and boundary violation. This enables nuanced intervention — a machine inference trending toward the user’s state is treated differently from one diverging, even at the same positional distance.
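The four divergence categories can be sketched as simple geometric tests. The thresholds, the bounds convention (a min/max range on the arousal axis), and all names are illustrative assumptions; CRE’s ODE-governed weighting is not modeled here.

```python
import math

def classify_divergence(user, machine, energy_bounds, pos_tol=0.3):
    # user / machine: dicts with "pos", "vel", "acc" 2-vectors in
    # (valence, arousal) space. Illustrative thresholds only.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return {
        "position": math.dist(user["pos"], machine["pos"]) > pos_tol,
        "velocity": dot(user["vel"], machine["vel"]) < 0,   # heading apart
        "acceleration": dot(user["acc"], machine["acc"]) < 0,
        "boundary": not (energy_bounds[0] <= machine["pos"][1] <= energy_bounds[1]),
    }

# Same positional distance, different verdicts: a machine inference trending
# toward the user (parallel velocities) vs. one diverging (anti-parallel).
user   = {"pos": (0.5, 0.6), "vel": (0.1, 0.0),  "acc": (0.0, 0.0)}
toward = {"pos": (0.3, 0.6), "vel": (0.1, 0.0),  "acc": (0.0, 0.0)}
away   = {"pos": (0.3, 0.6), "vel": (-0.1, 0.0), "acc": (0.0, 0.0)}
```

Both `toward` and `away` sit at the same positional distance from `user`, but only `away` flags velocity divergence — the nuance a scalar threshold cannot express.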
3.6 Trajectory Provenance
Every resolved emotional trajectory carries complete temporal attribution: user contributions, machine contributions, resolution traces, and conflict resolutions over its temporal span. Any resolved trajectory can be decomposed back to constituent inputs. This provides auditability for regulatory frameworks (GDPR emotional data processing, EU AI Act) and explainability for user trust.
4. The Mesh Memory Protocol
4.1 From Single-Agent to Mesh
The continuous-time architecture described in Section 3 operates on a single device. MMP extends it to the multi-agent case: each node runs its own CfC model, and MMP provides the transport for sharing cognitive state across nodes in real time.
4.2 Design Principles
Cognitive Autonomy. Each node runs a complete model and operates independently. There is no orchestrator, no central coordinator, no master node. Mesh coupling influences but never overrides local cognition.
Agent-to-Agent. No central server. Agents discover each other via zero-configuration networking, negotiate capabilities, and exchange state bilaterally.
Continuous-Time Native. State exchange, blending, and decay operate in continuous time, matching CfC dynamics.
Domain Agnostic. MMP carries hidden state vectors. Semantic meaning depends on the application; the protocol provides transport and coupling mechanics.
Privacy by Design. Hidden states are compact, opaque neural representations. No raw data crosses the mesh. Peer approval is required before exchange.
4.3 Protocol Overview
MMP operates through a layered architecture:
- Layers 0–3 (Protocol Infrastructure) — Identity (UUID v7, Ed25519 keypair), Transport (length-prefixed JSON over TCP/WebSocket), Connection (DNS-SD discovery, handshake, heartbeat), Memory (immutable CMBs with lineage DAG)
- Layer 4: Coupling & SVAF — per-field evaluation of incoming CMBs across 7 CAT7 fields (focus, issue, intent, motivation, commitment, perspective, mood) with per-agent field weights (α_f)
- Layer 5: Synthetic Memory — LLM-derived knowledge encoded into CfC-compatible hidden state vectors
- Layer 6: xMesh — per-agent Liquid Neural Network evolving cognitive state from mesh signals
- Layer 7: Application — agent’s LLM reasons on the remix subgraph of CMBs traced via lineage ancestors
The key innovation in MMP v0.2.0 is per-field evaluation via SVAF. Each incoming Cognitive Memory Block (CMB) contains 7 structured fields (CAT7). The receiving agent’s SVAF evaluates each field independently against the agent’s own field weights. A music agent weights mood highly and focus low — it accepts “user is fatigued” even when “debugging auth module” is irrelevant. Accepted CMBs are remixed with lineage, creating an immutable DAG that IS the collective intelligence. See the full specification.
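The transport layer above is described as length-prefixed JSON. A minimal framing sketch follows, assuming a 4-byte big-endian length prefix — the prefix width is our assumption, not taken from the MMP specification.

```python
import json
import struct

def encode_frame(msg: dict) -> bytes:
    # Hypothetical Layer 1 framing: 4-byte big-endian length prefix
    # followed by compact UTF-8 JSON. The prefix width is assumed.
    payload = json.dumps(msg, separators=(",", ":")).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_frame(buf: bytes) -> dict:
    # Read the declared length, then parse exactly that many payload bytes.
    (n,) = struct.unpack(">I", buf[:4])
    return json.loads(buf[4:4 + n].decode("utf-8"))

frame = encode_frame({"type": "cmb", "fields": {"mood": 0.4}})
```

Length prefixing lets a receiver accumulate bytes from a TCP stream and know precisely where each JSON message ends, without scanning for delimiters.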
4.4 Cognitive Coupling
When nodes exchange hidden states via MMP, each node blends its local CfC hidden state with the aggregated mesh state before inference. The coupling strength α ∈ [0, 1] is configurable per node and determines how strongly the mesh influences local cognition. Mesh state aggregation uses confidence-weighted averaging across connected peers, incorporating source priority and temporal recency.
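The aggregation and blending just described can be sketched as follows. The exponential-decay recency weighting, the half-life, and the multiplicative confidence-priority form are illustrative assumptions, not the SDK’s implementation.

```python
def aggregate_mesh_state(peers, now, half_life=10.0):
    # peers: list of (state_vector, confidence, priority, timestamp).
    # Weight each peer by confidence x priority, decayed by recency.
    dim = len(peers[0][0])
    weights = [c * p * 0.5 ** ((now - t) / half_life) for _, c, p, t in peers]
    total = sum(weights)
    return [sum(w * s[i] for (s, _, _, _), w in zip(peers, weights)) / total
            for i in range(dim)]

def couple(local, mesh, alpha=0.3):
    # h' = (1 - alpha) * h_local + alpha * h_mesh, with alpha in [0, 1].
    return [(1 - alpha) * l + alpha * m for l, m in zip(local, mesh)]

now = 100.0
peers = [([1.0, 0.0], 0.9, 1.0, now),        # fresh, confident peer
         ([0.0, 1.0], 0.9, 1.0, now - 50)]   # stale peer, heavily decayed
mesh = aggregate_mesh_state(peers, now)
```

With a 10-second half-life, the 50-second-old peer contributes roughly 3% of the fresh peer’s weight, so the aggregate tracks the recent state.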
In MMP v0.2.0, coupling operates at two levels. Peer-level coupling compares hidden state vectors (drift) to decide whether peers are cognitively aligned. Content-level coupling (SVAF) evaluates each CMB per field — 7 independent dimensions, each weighted by the receiving agent’s role. This allows a music agent to accept mood signals from a coding agent while rejecting irrelevant technical detail. The CMB lineage chain connects every remix to its parents, creating an auditable graph of collective reasoning.
4.5 Relationship to Kuramoto Synchronisation
The coupled CfC system bears structural similarity to Kuramoto-type synchronisation, where oscillators with heterogeneous natural frequencies develop coherence through coupling. This analogy provides two things: a measurement framework and a conceptual vocabulary.
Measurement. The Kuramoto order parameter r(t) = |⟨e^{iθ}⟩|, where θᵢ = atan2(h₂ᵢ, h₁ᵢ) per neuron, provides a rigorous measure of phase coherence across the mesh. r(t) → 1 indicates phase-locked synchronisation; r(t) → 0 indicates incoherence. This metric is implemented in the SDK’s NeuralCoupler and validated in production (r(t) = 0.943 observed between coupled MeloTune devices).
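Given two hidden state vectors, the order parameter is a few lines to compute. This is a direct transcription of the definition, not the NeuralCoupler implementation.

```python
import cmath
import math

def order_parameter(h1, h2):
    # theta_i = atan2(h2_i, h1_i) per neuron; r = |mean of exp(i * theta_i)|.
    phases = [math.atan2(b, a) for a, b in zip(h1, h2)]
    z = sum(cmath.exp(1j * t) for t in phases) / len(phases)
    return abs(z)

# Phase-locked neurons give r -> 1; opposed phases cancel toward r -> 0.
r_locked = order_parameter([1.0, 0.5], [1.0, 0.5])   # identical phases
r_split = order_parameter([1.0, -1.0], [0.0, 0.0])   # phases 0 and pi
```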
Coupling dynamics. The actual coupling mechanism is drift-bounded state blending: each neuron’s coupled state is a weighted interpolation between local and mesh states, with weight modulated by cosine drift, confidence, recency, and per-neuron structural time constant. This differs from Kuramoto’s sinusoidal phase coupling in a key property: state blending is unconditionally stable for coupling strength α < 1, whereas Kuramoto dynamics exhibit a critical threshold below which synchronisation cannot occur.
Three properties emerge from the coupling dynamics:
Temporal hierarchy. Per-neuron coupling rate is inversely proportional to structural τ. Fast neurons converge more rapidly than slow neurons, creating a natural separation of timescales — immediate emotional alignment (fast-τ) coexists with preserved individual identity (slow-τ).
Convergence through successive coupling. Trajectory alignment develops across successive coupling events at mood boundaries. Convergence is monotonic — each coupling event moves states closer (or leaves them unchanged if drift exceeds rejection threshold). The rate of convergence depends on coupling strength α, cosine drift between peers, and the distribution of structural time constants.
Heterogeneous coupling. Unlike classical Kuramoto (uniform coupling strength), each neuron couples at a rate determined by its learned temporal role. This is not an imposed design choice — it follows directly from the τ-modulated coupling formula: α_i = α_effective · K · sim_i / τ_i.
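The τ-modulated formula above yields a one-line blend per neuron. In this sketch sim_i is taken uniform for brevity, and α_i is clamped to [0, 1] so blending remains a convex interpolation — the clamp is our assumption.

```python
def tau_modulated_blend(h_local, h_mesh, tau, alpha_eff=0.3, K=1.0, sim=0.9):
    # alpha_i = alpha_eff * K * sim_i / tau_i, clamped to [0, 1] (assumed).
    out = []
    for l, m, t in zip(h_local, h_mesh, tau):
        a = min(1.0, max(0.0, alpha_eff * K * sim / t))
        out.append((1.0 - a) * l + a * m)
    return out

# Fast neuron (tau = 1s) moves most of the way toward the mesh state;
# slow neuron (tau = 5s) barely moves, preserving individual identity.
blended = tau_modulated_blend([0.0, 0.0], [1.0, 1.0], tau=[1.0, 5.0])
```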
4.6 Stability
The coupled system is stable under standard conditions: contraction mapping guarantees from bounded activations, bounded drift proportional to (1−α), and graceful degradation — when peers disconnect, local state smoothly transitions to autonomous operation with no discontinuity.
5. Inference-Paced Coupling
A fundamental architectural decision distinguishes Mesh Cognition from conventional distributed systems: coupling is driven by the model’s inference rhythm, not by network traffic.
5.1 The Principle
In federated learning, parameter servers, and gossip protocols, computation is triggered when a message arrives from the network. The network’s schedule drives the system. In Mesh Cognition, peer hidden states arrive continuously and accumulate silently. The coupling engine only evaluates them when the local model calls coupledState() — at inference time, on the model’s own clock.
```
Peer sync:  ────●────●────●────●────●────●────●────●────●──
                (state updates accumulate silently)
Inference:  ───────────────●────────────────●────────────────
                           ↑                ↑
                       couple()          couple()
                    evaluates all     evaluates all
                        peers             peers
```
5.2 Why This Matters
No wasted computation. The coupling engine runs once per inference step, not on every network message. In MeloTune, the CfC model processes observations at mood boundaries — when the user’s emotional context changes through mood selection, session transition, or station change. Between boundaries, peer state updates arrive via MMP and accumulate. When the next mood boundary triggers inference, the coupler evaluates all accumulated peer states in one pass, and the CfC model processes the new observation through the coupled hidden state with the full elapsed Δt. This event-driven cadence is scientifically valid because CfC networks handle arbitrary time intervals natively — the closed-form solution takes Δt as an explicit input.
Deterministic integration point. Coupling always happens at the same point in the inference pipeline — after the model produces new hidden states and before the next forward pass. The hidden state is never perturbed at unpredictable times mid-inference.
Biologically plausible. Biological neurons do not process every incoming signal the instant it arrives. They integrate inputs over their membrane time constant and fire at their own rhythm. Inference-paced coupling mirrors this: the model integrates peer states at its natural cadence, not at the network’s.
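The accumulate-then-evaluate pattern can be sketched in a few lines. Class and method names here are illustrative, not the SDK’s NeuralCoupler API; the aggregation is a plain mean for brevity.

```python
class InferencePacedCoupler:
    # Peer hidden states accumulate silently; they are evaluated only when
    # the local model asks for coupled_state() at inference time.
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.pending = {}                     # peer_id -> latest state vector

    def on_peer_state(self, peer_id, state):
        self.pending[peer_id] = state         # overwrite: only latest counts

    def coupled_state(self, local):
        if not self.pending:
            return list(local)                # autonomous operation
        n = len(self.pending)
        mesh = [sum(s[i] for s in self.pending.values()) / n
                for i in range(len(local))]
        return [(1 - self.alpha) * l + self.alpha * m
                for l, m in zip(local, mesh)]

coupler = InferencePacedCoupler(alpha=0.5)
coupler.on_peer_state("mac", [0.0])
coupler.on_peer_state("mac", [1.0])           # stale update superseded
state = coupler.coupled_state([0.0])          # evaluated once, at inference
```

Note that the second update from the same peer simply replaces the first: no work is done per network message, only per inference step.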
5.3 Connection to Per-Neuron Time Constants
Inference-paced coupling is not just an engineering convenience — it connects directly to the per-neuron time constants (τ) that CfC networks learn during training. Between two inference steps, fast neurons (small τ) will have decayed significantly, making them receptive to peer influence. Slow neurons (large τ) will have barely changed, naturally resisting peer coupling. The inference interval acts as a temporal filter that the learned time constants modulate.
This is what distinguishes Mesh Cognition from messaging systems. Messaging delivers data when it arrives. Cognition integrates information on its own temporal schedule. The coupling respects both the model’s rhythm and each neuron’s learned temporal dynamics.
6. Emergent Properties
When agents combine their perspectives through CfC coupling, properties emerge that no individual node possesses:
6.1 Collective Memory
When multiple nodes experience the same event, the mesh’s CfC hidden states encode a collective memory distributed across slow-τ neurons. This memory persists even if individual nodes reset — reconnecting nodes re-absorb the collective state through hidden state blending.
Collective memory states can be cryptographically committed via Merkle state roots and hash-chained provenance, enabling tamper-evident verification of shared cognitive history without requiring blockchain infrastructure. Zero-knowledge proofs offer a path to verified mesh participation without hidden state disclosure — nodes can prove they hold authentic collective memory without revealing their cognitive state.
6.2 Emergent Stability
Individual nodes may exhibit volatile trajectories. Mesh coupling introduces damping: each node’s trajectory is pulled toward the mesh mean, reducing individual volatility. This is mathematically equivalent to adding a diffusive term to the CfC dynamics.
6.3 Distributed Pattern Detection
Pattern activations exchanged across the mesh create a natural voting mechanism. When multiple nodes detect the same pattern independently, the mesh amplifies the signal. Single-node detections are dampened unless confidence is high. The mesh achieves more reliable pattern recognition than any individual node.
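The voting mechanism can be sketched as follows. The quorum size, solo threshold, and function name are illustrative assumptions.

```python
from collections import defaultdict

def mesh_pattern_vote(detections, quorum=2, solo_threshold=0.9):
    # detections: {node_id: {pattern: confidence}}. Patterns reported by at
    # least `quorum` nodes are amplified to their mean confidence; a
    # single-node detection survives only above solo_threshold.
    votes = defaultdict(list)
    for pats in detections.values():
        for pattern, conf in pats.items():
            votes[pattern].append(conf)
    result = {}
    for pattern, confs in votes.items():
        if len(confs) >= quorum:
            result[pattern] = sum(confs) / len(confs)
        elif max(confs) >= solo_threshold:
            result[pattern] = max(confs)
    return result

votes = mesh_pattern_vote({
    "phone": {"rising_anxiety": 0.7, "fatigue": 0.6},
    "mac":   {"rising_anxiety": 0.8},
})
# "rising_anxiety" is confirmed by both nodes; the solo "fatigue" at 0.6
# falls below the solo threshold and is dampened out.
```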
6.4 Phase-Locked Alignment
Fast-τ CfC neurons across coupled nodes tend to phase-lock — a direct analog of neural synchronization observed in biological systems (e.g., gamma-band synchronization during attention). In the MMP context, phase-locking indicates cognitive alignment between devices.
6.5 Scaling
| Mesh Size | Behavior |
|---|---|
| N=1 | Autonomous operation |
| N=2 | Bilateral cognitive resonance |
| N=3–5 | Group dynamics emerge; majority creates gravitational center |
| N=10+ | Collective patterns dominate; individual influence decreases as 1/N |
The transition from individual to collective cognition occurs at small N (typically 3–5), making Mesh Cognition immediately useful for small-group scenarios.
7. Production Validation: MeloTune
7.1 Application Context
MeloTune is a mood-driven music platform developed by SYM.BOT. It uses a CfC neural network to model the user’s emotional trajectory and proactively curate music. The model operates entirely on-device via CoreML with zero cloud dependency for personal data.
7.2 Proactive Curation
MeloTune’s Proactive Curation Engine (PCE) consumes MEC output to predict not just what the user feels but what they want. By projecting the emotional trajectory forward and applying inferred intent (energize, calm, uplift, maintain, focus, process), PCE pre-curates playlists before the user requests them. Music is ready when the user opens the app.
This collapses the traditional emotion-AI pipeline — observe, classify, display, wait for user action — into a single anticipatory step.
7.3 Mesh Cognition in Practice
When two MeloTune devices connect via MMP:
- Emotional convergence — devices’ mood trajectories align through successive coupling events at mood boundaries. The convergence rate depends on coupling strength α, cosine drift between peers, and the structural time constant distribution. Fast-τ neurons converge more rapidly than slow-τ neurons, as their coupling rate α_i is inversely proportional to τ_i.
- Coordinated curation — music selections develop thematic and emotional coherence across devices, without playing the same tracks.
- Resilient individuality — at moderate coupling, each device maintains its unique emotional character while participating in the group field. Strong local signals override mesh influence.
7.3.1 Two-Device Neural Coupling
The following describes observed behaviour when two MeloTune instances (iPhone and Mac) connect via both Bonjour (LAN) and WebSocket relay. Each device runs an independent CfC inference engine with 64 neurons and learned time constants from 1s to 5s.
What flows between devices:
- Each device publishes its CfC hidden state vectors (h₁, h₂) via MMP frames at mood boundaries and periodically during playback.
- The receiving device’s coupling engine evaluates drift against its own hidden state and decides whether to blend the peer’s state into its local trajectory.
- Coupling happens at inference time — peer states accumulate continuously, but the engine only evaluates them when the app calls coupledState().
Per-neuron τ-modulated behaviour:
- Fast neurons (τ ≈ 1s) — encode mood (valence + arousal on the circumplex). Sync quickly between devices. If one agent sees calm (v:0.3, a:-0.4) and another sees energetic (v:0.6, a:0.7), the fast neurons pull both toward a shared affective trajectory within a few coupling cycles.
- Slow neurons (τ ≈ 5s) — encode listening context and history. Stay largely independent. Each device retains its own genre context, session history, and accumulated patterns.
What stays sovereign per device:
- Genre selection — the iPhone may play Ambient while the Mac plays Lo-Fi. Genre is a user choice, not a coupled variable.
- Track selection — each device runs its own curation pipeline. They never play the same tracks.
- Listening history — slow-τ neurons preserve each device’s accumulated context. Two users with different listening histories will not suddenly converge on slow dimensions.
Observed production metrics (two-device mesh):
- Peer coupling decision: guarded (drift 0.35, threshold 0.50) — partially related context, sharing with caution.
- Kuramoto phase coherence: r(t) = 0.94 — high alignment after 3 coupling cycles.
- iPhone mood: E:45 N:25 (Composed Balance, Ambient). Mac mood: E:46 N:36 (Assured Steadiness, Lo-Fi). Shared emotional zone, independent genre and track selection.
7.4 Validation
Alignment is validated through an automated pipeline covering 164 industry genres, 400 moods per genre, and ~10,000 emotion-energy meter variations — 1.64M+ systematic query combinations. Production telemetry supplements automated testing with real behavioral feedback loops.
8. Applications Beyond Music
Mesh Cognition is domain-agnostic. The same continuous-time coupling mechanics apply wherever multiple devices benefit from shared cognitive state:
- Collaborative Robotics — swarm coordination via coupled CfC dynamics. Fast-τ neurons synchronize for collision avoidance; slow-τ neurons coordinate task allocation. No central planner required.
- Autonomous Vehicle Fleets — fleet-level situational awareness. One vehicle detecting a hazard shifts the mesh’s hidden state, causing nearby vehicles to adjust before direct observation.
- Therapeutic Group Monitoring — real-time collective emotional state across session participants. Group emotional events emerge from the combination of individual agent observations, enabling computational co-regulation.
- Smart Environment Orchestration — devices in a smart environment share cognitive state, developing emergent “atmosphere” that no single device computes.
- Creative Collaboration — co-creators’ tools develop shared creative understanding through coupled intent trajectories.
9. Architecture Comparison
| Dimension | Discrete Systems | MCP / A2A | Federated Learning | Multi-Agent RL | Mesh Cognition |
|---|---|---|---|---|---|
| Temporal model | Snapshot | Request-response | Snapshot | Discrete steps | Continuous trajectory |
| Coupling time | None (isolated) | Per-request | Offline (training) | Per-step | Real-time (inference) |
| Coordination | N/A | Client-server / server-mediated | Central aggregator | Central or decentralized | Peer-to-peer |
| State shared | N/A | Messages / task results | Gradients | Observations/rewards | CfC hidden states |
| Emergent properties | None | None | Improved model | Learned policies | Collective memory, stability, pattern detection |
| Privacy | Device-local | Full message exposure to server | Gradient exposure risk | Observation sharing | Opaque hidden vectors |
| Deployment | Single device | Requires server infrastructure | Training infrastructure | Training infrastructure | Local network, zero cloud |
10. Research Directions
| Direction | Description |
|---|---|
| Lateral CfC Connections | Retraining CfC models with explicit peer hidden state input dimensions, enabling learned coupling dynamics |
| Attention-Based Aggregation | Multi-head attention over peer hidden vectors for selective peer influence |
| Internet-Scale Mesh | Extending MMP beyond local networks via relay infrastructure for wide-area cognitive coupling |
| Heterogeneous Mesh | Projection layers mapping between CfC models of different architectures for cross-application mesh cognition |
| Differential Privacy | Formal privacy guarantees for exchanged hidden states while preserving mesh properties |
| Hierarchical Mesh | Multi-scale architecture where local meshes couple into regional meshes via aggregated super-states |
| Cross-Domain Handoff | Governed emotional trajectory transfer between applications with provenance preservation |
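The attention-based aggregation direction can be made concrete with a single-head sketch: the local hidden state acts as the query, and peer hidden vectors serve as both keys and values, so each peer's influence scales with its similarity to the local state. A trained design would add learned projection matrices and multiple heads; everything here is a minimal illustration, not the proposed architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_to_peers(h_local, peer_states):
    """Single-head scaled dot-product attention over peer hidden vectors.

    The local hidden state is the query; peer states are both keys and
    values. A learned design would use projection matrices and multiple
    heads; this is an illustrative simplification.
    """
    H = np.stack(peer_states)                     # (n_peers, d)
    scores = H @ h_local / np.sqrt(h_local.size)  # query-key similarity
    weights = softmax(scores)                     # selective peer influence
    return weights @ H, weights                   # aggregated state, weights

h_local = np.array([1.0, 0.0])
peers = [np.array([0.9, 0.1]), np.array([0.0, 1.0])]
agg, w = attend_to_peers(h_local, peers)
# the peer whose state aligns with the local state receives the larger weight
```

The softmax makes peer influence selective rather than uniform, which is the property this research direction targets: a noisy or divergent peer is down-weighted automatically instead of dragging the local state.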
11. Conclusion
Mesh Cognition introduces a new paradigm for distributed AI: real-time cognitive coupling of on-device continuous-time neural networks through agent-to-agent hidden state exchange.
The framework makes three advances over the state of the art. First, it replaces discrete emotional classification with continuous-time trajectory modeling, enabling anticipatory intelligence that sees where emotion is going rather than where it is. Second, it introduces the Mesh Memory Protocol for live inference-time coupling — fundamentally different from federated learning’s offline gradient sharing or multi-agent RL’s discrete observation exchange. Third, the theoretical connection to Kuramoto synchronization provides analytical tools for predicting and engineering mesh behavior, including synchronization timescales, critical coupling thresholds, and emergent collective properties.
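The Kuramoto connection invoked above is concrete enough to simulate directly. The sketch below integrates the classic mean-field model and reports the order parameter r, reproducing the coupling-threshold behavior (incoherence below a critical coupling, synchronization above it); all parameters are illustrative and unrelated to the trained mesh models:

```python
import numpy as np

def kuramoto_order(K, n=100, steps=2000, dt=0.05, seed=0):
    """Integrate n mean-field Kuramoto oscillators at coupling K and
    return the final order parameter r in [0, 1]; r near 1 indicates
    phase synchronization. Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)          # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n) # initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))      # complex order parameter r * e^{i psi}
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r_weak = kuramoto_order(0.1)    # below the critical coupling: incoherent
r_strong = kuramoto_order(2.0)  # well above it: near-synchronized
```

This is the analytical leverage the conclusion refers to: sweeping K in a model like this predicts where the synchronization transition sits, and the same reasoning transfers to choosing coupling strengths for a mesh.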
The contribution is not the underlying neural network — CfC is the work of Hasani et al. The contribution is the coupling: the protocol, the theoretical framework, and the production architecture that turns isolated CfC models into a distributed cognitive mesh. Any application running a CfC model can participate in mesh cognition by implementing the protocol.
We own the protocol, the architecture, and the production implementation. We believe this work opens a research direction at the intersection of continuous-time neural networks, distributed systems, and collective intelligence.
References
1. Hasani, R., Lechner, M., Amini, A., Liebenwein, L., Ray, A., Tschaikowski, M., ... & Rus, D. (2022). Closed-form continuous-time neural networks. Nature Machine Intelligence, 4(11), 992–1003.
2. Hasani, R., Lechner, M., Amini, A., Rus, D., & Grosu, R. (2021). Liquid time-constant networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7657–7666.
3. Kuramoto, Y. (1975). Self-entrainment of a population of coupled non-linear oscillators. International Symposium on Mathematical Problems in Theoretical Physics, 420–422.
4. Strogatz, S. H. (2000). From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D, 143(1–4), 1–20.
5. McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. AISTATS, 1273–1282.
6. Foerster, J., Assael, Y., de Freitas, N., & Whiteson, S. (2016). Learning to communicate with deep multi-agent reinforcement learning. NeurIPS, 2137–2145.
7. Xu, H. (2025). EI-3 Emotional Intelligence Architecture. SYM.BOT Research Technical Specification v1.0.
8. Xu, H. (2025). Symbolic-Vector Attention Fusion for Semantic Alignment in Multi-Agent Cognitive Systems. SYM.BOT Research.
Implementation
SYM provides the infrastructure and protocol for mesh cognition. The coupling engine, SVAF evaluation, and xMesh collective intelligence are built into the SDK. Open source under Apache 2.0.
MeloTune on iOS joins the mesh with a 100-line service class using the Swift SDK. Claude Code on macOS joins via the SYM daemon. Any AI coding agent can integrate SYM by reading the README and adding the package.
Intellectual Property
Mesh Cognition is original work by Hongwei Xu and SYM.BOT. The following remain proprietary: trained CfC models and training procedures, SYM transformation mechanisms, xMesh coupled dynamics, domain-specific product integrations (MeloTune, MeloMove), and production configurations.
Academic citation of this work is permitted and encouraged.
For partnership inquiries: info@sym.bot
Mesh Cognition, Mesh Memory Protocol, MMP, SYM, Synthetic Memory, xMesh, SVAF, MeloTune, and MeloMove are trademarks of SYM.BOT. © 2026 SYM.BOT. All Rights Reserved.