
understanding-graph

Persistent reasoning memory for AI agents via typed cognitive nodes and supersedes edges.

Updated Apr 8, 2026

Quick Install

npx -y understanding-graph

Understanding Graph: A Reasoning-Capture Architecture for AI Memory

Persistent memory for AI agents. Shared cognition through stigmergy.


Understanding Graph is an MCP server that gives AI agents structured, persistent memory. Unlike knowledge bases that store facts, it stores the reasoning process -- tensions, surprises, decisions, and how beliefs evolved over time. Multiple agents coordinate through the graph itself: each agent reads what others have written, builds on it, and leaves traces for the next -- stigmergy, coordination through a shared environment rather than direct messages.

Why Understanding Graph?

| Traditional Memory | Understanding Graph |
|---|---|
| Stores facts | Stores comprehension |
| "User prefers dark mode" | "User switched to dark mode after eye strain -- tension between aesthetics and comfort resolved toward comfort" |
| Flat retrieval | Reasoning trails |
| Forgets context | Preserves the why |
| Single agent | Multi-agent coordination through shared graph |

Core insight: AI agents don't just need to remember facts -- they need to remember how they arrived at conclusions so they can build on previous reasoning. When multiple agents share a graph, they coordinate without direct communication.


Quick Start

Claude Code (zero-install)

Add understanding-graph to Claude Code with one command -- no global install, nothing to clone:

claude mcp add ug -- npx -y understanding-graph mcp

npx -y downloads, caches, and runs the package on first invocation. After this, ug is available as an MCP server in every Claude Code session.

Claude Code with agent teams (one-command project setup)

For projects where you want agent teams to share the graph automatically, run the init flow inside the project directory:

cd your-project
npx -y understanding-graph init

This creates:

  • .claude/settings.local.json -- MCP server config (with agent teams enabled)
  • CLAUDE.md -- Instructions that all agents and teammates follow automatically
  • projects/default/ -- Graph storage directory

Now open Claude Code. Every session (and every agent team teammate) shares the same graph.

Per-client setup guides: integrations/claude-code.md · integrations/claude-desktop.md · integrations/cursor.md · integrations/mcporter.md

Claude Desktop

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "understanding-graph": {
      "command": "npx",
      "args": ["-y", "understanding-graph", "mcp"],
      "env": {
        "PROJECT_DIR": "/path/to/your/projects"
      }
    }
  }
}

Cursor / Windsurf

Add to your MCP config:

{
  "mcpServers": {
    "understanding-graph": {
      "command": "npx",
      "args": ["-y", "understanding-graph", "mcp"],
      "env": {
        "PROJECT_DIR": "/path/to/your/projects"
      }
    }
  }
}

Web UI / 3D visualization (build from source)

The npm package ships only what the MCP server needs (~1.5 MB) so npx -y understanding-graph mcp stays fast. The web UI and 3D visualization require the frontend bundle (~160 MB of three.js + react + onnxruntime), which is not in the published tarball for v0.1.0. To run the UI, clone the repo:

git clone https://github.com/emergent-wisdom/understanding-graph.git
cd understanding-graph
npm install
npm run build
npm run start:web
# open http://localhost:3000

Optional: enable embedding-based search

graph_semantic_search, graph_similar, graph_semantic_gaps, and graph_backfill_embeddings all rely on @xenova/transformers (a local embedding model, ~160 MB once compiled). It is declared as an optional peer dependency so the default install stays small. If you need those tools:

npm install -g @xenova/transformers

Without it, the rest of the graph (skeleton, history, batch, supersede, semantic_search via metadata, find_by_trigger, etc.) works fine — the embedding tools will return a clear error if you call them without installing the peer.
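Once the optional peer is installed, a semantic search call looks roughly like this (the query and limit argument names are illustrative assumptions; the actual schema comes from tools/list):

```json
{
  "tool": "graph_semantic_search",
  "arguments": {
    "query": "why did we move session state out of localStorage",
    "limit": 5
  }
}
```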


How It Works

Every mutation goes through graph_batch with a required commit_message — git for cognition. The batch is wrapped in a SQLite transaction: if any operation fails, the entire batch rolls back as if it never ran. Nodes are never deleted, only superseded. The commit stream becomes a metacognitive log that other agents can read to understand what happened and why — each node's commit message becomes its Origin Story, the agent's inner monologue at the moment of creation.

1. project_switch("my-project")        # Load (or create) a project
2. graph_skeleton()                     # Orient yourself (~150 tokens)
3. graph_history()                      # See what other agents did recently
4. graph_semantic_search({ query })     # Find relevant past reasoning (optional embeddings)
5. graph_batch({ commit_message, ... }) # Mutate with intent — atomic
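Step 5 above can be sketched as a single atomic payload. This is a hedged example: commit_message is required by the server, but the operations array and its field names are illustrative assumptions (check tools/list for the real schema):

```json
{
  "tool": "graph_batch",
  "arguments": {
    "commit_message": "Investigated auth flow: JWT-in-localStorage creates an XSS tension",
    "operations": [
      { "op": "graph_add_concept", "title": "JWT in localStorage", "trigger": "tension" },
      { "op": "graph_connect", "from": "JWT in localStorage", "to": "Auth architecture", "type": "refines" }
    ]
  }
}
```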

Atomic commits

graph_batch is the only mutation entry point. Inside one batch you can chain graph_add_concept, graph_connect, graph_question, graph_supersede, doc_create, and others. The pre-validation check accepts both ID and title references for graph_connect, and computes transitive reachability (so a chain A → B → existing is valid even though A doesn't directly touch existing). On any failure mid-batch, the entire transaction rolls back; no half-state.
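To illustrate the transitive-reachability rule, one batch can introduce two new nodes where only the second touches an existing node. The payload shape below is a sketch (node titles and operation fields are hypothetical):

```json
{
  "tool": "graph_batch",
  "arguments": {
    "commit_message": "Chain: A refines B, B refines an existing concept",
    "operations": [
      { "op": "graph_add_concept", "title": "A" },
      { "op": "graph_add_concept", "title": "B" },
      { "op": "graph_connect", "from": "A", "to": "B", "type": "refines" },
      { "op": "graph_connect", "from": "B", "to": "Existing concept", "type": "refines" }
    ]
  }
}
```

A is valid here even though it never touches an existing node directly: the validator follows A → B → existing before committing.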

Cross-project references

A graph node in one project can reference a node in another project via graph_add_reference({ refProject, refNodeId }). Other projects can then read it without switching via graph_lookup_external or find it by ID alone via graph_global_lookup. This is the substrate for the Hierarchical Understanding Graph used by the entangled-alignment chronological annotation pipeline, where eras and documents draw cross-references.
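The refProject and refNodeId parameters are named in the text; the project and node values below are hypothetical placeholders:

```json
{
  "tool": "graph_add_reference",
  "arguments": {
    "refProject": "other-project",
    "refNodeId": "node-123"
  }
}
```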


Core Concepts

Nodes (Understanding Units)

Each node captures a moment of comprehension with a trigger marking why it was created:

Triggers are cognitive acts, not categories — they capture why the agent created the node at this exact moment, not what kind of thing it is. The seven you'll use most often:

| Trigger | When to Use |
|---|---|
| foundation | Core concepts, axioms, starting points |
| surprise | Unexpected findings, contradicts prior belief |
| tension | Conflict between ideas, unresolved |
| consequence | Downstream implication |
| question | Open question to explore |
| decision | Choice made between alternatives, with rationale |
| prediction | Forward-looking belief that can be validated later |

Less common but available: hypothesis, model, evaluation, analysis, experiment, serendipity, repetition, randomness, reference, library. The thinking trigger is reserved for the synthesizer agent and rejected for normal use. The full set of 18 trigger types is documented in the understanding-graph paper (Section 3.1) and described as the minimum count for cognitive distinguishability — collapsing them would dissolve the Cognitive Guardrail.
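Tagging happens at creation time. A minimal sketch of a triggered node inside a batch (the title is hypothetical, and the exact argument names may differ from this guess):

```json
{
  "op": "graph_add_concept",
  "title": "Session tokens leak via backup logs",
  "trigger": "surprise"
}
```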

Edges (Connections)

| Edge Type | Meaning |
|---|---|
| supersedes | New understanding replaces old |
| contradicts | Ideas in conflict |
| refines | Adds precision to existing understanding |
| learned_from | Attribution of insight |
| answers / questions | Resolves or raises questions |
| contains | Parent-child hierarchy |
| next | Sequential ordering |
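The supersedes edge is created by graph_supersede rather than by hand. As a hedged sketch (argument names here are assumptions, not the published schema):

```json
{
  "tool": "graph_supersede",
  "arguments": {
    "oldNode": "Sessions stored client-side",
    "newTitle": "Sessions stored server-side with httpOnly cookie",
    "reason": "XSS tension resolved toward server-side storage"
  }
}
```

The old node stays in the graph; the supersedes edge preserves the path from the earlier belief to the current one.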

Documents

Structured content (notes, papers, code) with thinking nodes interleaved -- capturing comprehension as it happens.

Projects

Isolated graphs for different contexts. Each project has its own SQLite database.


Tools Overview

50+ top-level tools listed via tools/list, plus additional batch-only operations callable through graph_batch

Batch Operations

| Tool | Purpose |
|---|---|
| graph_batch | Execute multiple operations as an atomic commit with a required commit_message. Wrapped in a SQLite transaction: if any operation fails, the entire batch is rolled back. The commit_message is preserved as the node's Origin Story — future agents reading those nodes see not just the content but the intent that created it. |

Concept & Node Management

| Tool | Purpose |
|---|---|
| graph_add_concept | Add new concept with duplicate detection |
| graph_question | Create question node for exploration |
| graph_revise | Update concept understanding |
| graph_supersede | Replace outdated concept |
| graph_add_reference | Add external/cross-project references |
| graph_rename | Rename node (updates soft references) |
| graph_archive | Soft-delete preserving history |
| node_set_metadata | Set arbitrary metadata on nodes |
| node_get_metadata | Retrieve node metadata |
| node_set_trigger | Change node classification |
| node_get_revisions | Get understanding evolution history |

Connection Management

| Tool | Purpose |
|---|---|
| graph_connect | Create edges between concepts |
| graph_answer | Record answer to a question node |
| graph_disconnect | Remove/archive edges |
| edge_update | Update edge type or explanation |
| edge_get_revisions | Get relationship history |

Reading & Analysis

| Tool | Purpose |
|---|---|
| graph_skeleton | Structural overview (~150 tokens) |
| graph_context | Surrounding context for a concept |
| graph_context_region | Context for multiple related nodes |
| graph_semantic_search | Find nodes by meaning |
| graph_similar | Find conceptually similar nodes |
| graph_find_by_trigger | Find nodes by type |
| graph_analyze | Concept and pattern frequencies |
| graph_semantic_gaps | Find disconnected concepts |
| graph_score | Graph health metrics |
| graph_path | Reasoning path between concepts |
| graph_centrality | Most influential concepts |
| graph_thermostat | Regulate graph density |
| graph_history | Commit history and changes |

Synthesis & Exploration

| Tool | Purpose |
|---|---|
| graph_discover | Serendipity pipeline with chaos injection |
| graph_random | Random insights from graph |
| graph_serendipity | Record synthesized connection |
| graph_validate | Validate proposed connections |
| graph_chaos | Inject controlled randomness |
| graph_decide | Evaluate competing concepts |
| graph_evaluate_variations | Compare alternative ideas |

Document Operations

| Tool | Purpose |
|---|---|
| doc_create | Create document with content |
| doc_revise | Modify document text |
| doc_insert_thinking | Insert thinking node in document |
| doc_append_thinking | Append thinking at end |

Source Reading

| Tool | Purpose |
|---|---|
| source_load | Load text for staged reading |
| source_read | Read next portion, auto-create nodes |
| source_position | Get reading progress |
| source_list | List loaded sources |
| source_export | Export with thinking interleaved |
| source_commit | Record thinking during reading |

Project Management

| Tool | Purpose |
|---|---|
| project_switch | Switch active project |
| project_list | List available projects |

Cross-Project

| Tool | Purpose |
|---|---|
| graph_lookup_external | Look up node in another project |
| graph_list_external | List accessible external projects |
| graph_find_by_reference | Find nodes referencing a concept |
| graph_resolve_references | Verify cross-project references |
| graph_global_lookup | Search across all projects |

Thematic System

| Tool | Purpose |
|---|---|
| theme_create | Define theme with activation zones |
| theme_activate | Enter activation zone |
| theme_landing | Mark pause point |
| theme_get_active | Get active themes |
| theme_check_alignment | Validate prose alignment |
| theme_deactivate | Leave activation zone |

Multi-Agent Coordination (Solver)

| Tool | Purpose |
|---|---|
| solver_spawn | Register specialized solver agent |
| solver_delegate | Post task to solver queue |
| solver_claim_task | Claim pending task (worker mode) |
| solver_complete_task | Submit task results |
| solver_list | List registered solvers |
| solver_queue_status | Task queue statistics |

Multi-Agent with Claude Code Agent Teams

Understanding Graph is designed as the shared memory layer for Claude Code Agent Teams. After running npx understanding-graph init, every teammate in an agent team automatically shares the same graph -- stigmergy out of the box.

How it works

You: "Create an agent team to research and implement auth for this app"

Claude (Team Lead):
  ├── Researcher teammate ─── reads/writes shared graph ───┐
  ├── Backend teammate    ─── reads/writes shared graph ───┤  Same Understanding Graph
  ├── Security teammate   ─── reads/writes shared graph ───┤  (via MCP)
  └── synthesizes findings from graph_history()            ┘
  1. init creates the CLAUDE.md -- Every teammate loads it automatically, so all agents know to use graph_skeleton() to orient, graph_batch for mutations, and graph_history() to read the metacognitive trail.
  2. Commit messages are the coordination layer -- Each graph_batch requires a commit_message. When the Security teammate writes "Security Agent: found JWT stored in localStorage -- tension between convenience and XSS risk", the Backend teammate sees it via graph_history() and acts on it.
  3. Triggers classify contributions -- Teammates tag their nodes (tension, question, decision, surprise), making it easy to find what matters: "show me all unresolved tensions" or "what questions are still open?"
  4. No direct messaging needed -- Teammates coordinate through the graph itself. The researcher leaves question nodes; the backend agent finds them via graph_find_by_trigger and creates answers edges.
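The question/answer handoff described above might look like the following pair of calls (the question and answer values are hypothetical, and the argument names are assumptions):

```json
[
  { "tool": "graph_find_by_trigger", "arguments": { "trigger": "question" } },
  { "tool": "graph_answer", "arguments": { "question": "Which token store resists XSS?", "answer": "httpOnly cookie plus CSRF token" } }
]
```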

Getting started with a swarm

cd your-project
npx understanding-graph init     # one-time setup

Then in Claude Code:

Create an agent team with 3 teammates to [your task].
Each teammate should read graph_skeleton() first to orient,
then use graph_batch with descriptive commit messages so
the team can coordinate through the shared understanding graph.

Long-running coordination (solver system)

For tasks that span multiple sessions or need async handoff beyond a single team:

| Tool | Purpose |
|---|---|
| solver_spawn | Register a specialist (e.g., "SecurityReviewer", "ArchiveKeep") |
| solver_delegate | Post a task to the queue |
| solver_claim_task | Pick up pending work (worker mode) |
| solver_complete_task | Submit results |
| solver_lock / solver_unlock | Prevent conflicts on shared nodes |

The solver system persists in the SQLite database, so tasks survive across sessions. One team can delegate work that a future team picks up.
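A full delegate-claim-complete cycle might look like this sketch, with one session posting work and a later one picking it up (all argument names and values are illustrative, not the published schema):

```json
[
  { "tool": "solver_spawn", "arguments": { "name": "SecurityReviewer" } },
  { "tool": "solver_delegate", "arguments": { "task": "Review auth nodes for unresolved tensions" } },
  { "tool": "solver_claim_task", "arguments": {} },
  { "tool": "solver_complete_task", "arguments": { "result": "Two tensions resolved, one escalated" } }
]
```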


Architecture

packages/
  core/          # Graph logic, SQLite storage, embeddings
  mcp-server/    # MCP server (~50 listed tools + batch-only operations)
  web-server/    # REST API + serves frontend
  frontend/      # 3D visualization (React + Three.js)

Stack:

  • SQLite + better-sqlite3 -- Persistent storage
  • Graphology -- In-memory graph operations
  • MCP Protocol -- Agent integration
  • Transformers.js -- Local embeddings for semantic search

Development

git clone https://github.com/emergent-wisdom/understanding-graph.git
cd understanding-graph
npm install
npm run build
npm run start:web    # Web UI at http://localhost:3000

Dev mode

# Terminal 1: Web server with hot reload
npm run dev:web

# Terminal 2: Frontend dev server
cd packages/frontend && npm run dev

Environment Variables

| Variable | Default | Description |
|---|---|---|
| PROJECT_DIR | ./projects | Where to store project data |
| PORT | 3000 | Web server port |
| ANTHROPIC_API_KEY | -- | For autonomous workers (optional) |
| TOOL_MODE | full | Tool exposure: reading, research, or full |
| DEFAULT_PROJECT | default | Project loaded on startup |
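For example, to point the server at a custom storage directory and restrict tool exposure, the env block in any of the MCP configs above could be extended like this (paths and project name are illustrative):

```json
{
  "env": {
    "PROJECT_DIR": "/data/ug-projects",
    "TOOL_MODE": "research",
    "DEFAULT_PROJECT": "auth-work"
  }
}
```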

The Five Laws

  1. Git for Cognition — Nodes are never deleted, only superseded. The supersession edge preserves the epistemic journey: a future agent reading the chain learns not just the current belief but the path from the wrong belief to the right one. Every graph_batch requires a commit_message that becomes the node's Origin Story.
  2. PURE Standard — Quality gates: Parsimonious, Unique, Realizable, Expansive. Non-compensatory: any RED gate halts.
  3. Graph-First Context — Always check existing graph before creating new nodes. Duplicate detection is a forcing function, not a check: it pulls prior cognition into the current moment.
  4. Synthesize, Don't Transcribe — Capture implications and tensions, not raw facts. The graph is for your understanding, not your input.
  5. Delegate by Default — Break complex tasks into sub-tasks via the solver system; let specialized agents (Skeptic, Connector, Synthesizer) coordinate stigmergically through the graph.

Using with sema

Understanding Graph gives your agents shared episodic memory — the reasoning trail behind a decision. Sema gives them shared semantic memory — a content-addressed vocabulary of cognitive patterns. They compose:

# Add both to Claude Code
claude mcp add ug   -- npx -y understanding-graph mcp
claude mcp add sema -- uvx --from semahash sema mcp

With both installed, an agent can:

  1. Reference a sema pattern hash (e.g. StateLock#7859) inside an understanding-graph node's mechanism field to pin the meaning of a coordination primitive.
  2. Use graph_semantic_search to find all graph nodes that reference a given sema pattern, across projects and agent teams.
  3. Call sema_handshake to verify that two agents share the same definition of a pattern before building on each other's thinking in the graph — the fail-closed handshake prevents silent semantic drift.
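Step 1 above might look like the following node sketch: the mechanism field and the StateLock#7859 hash come from the text, while the title and surrounding operation shape are hypothetical:

```json
{
  "op": "graph_add_concept",
  "title": "Coordinator uses StateLock for shared-node access",
  "trigger": "decision",
  "mechanism": "StateLock#7859"
}
```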

Full walkthrough: docs/using-with-sema.md

Coding inside the graph

A concrete end-to-end example: build a Bloom filter entirely as graph nodes — design as foundation + decision concepts grounded in sema Check#1544, implementation as a .py doc tree, empirical verification as an evaluation node — then doc_generate it to a runnable file whose self-test prints AcceptSpec#70dd: PASS.

See docs/coding-inside-the-graph.md.


License

MIT -- LICENSE

GitHub: emergent-wisdom/understanding-graph
npm: understanding-graph
MCP Protocol: modelcontextprotocol.io
