Palinode

Git-versioned markdown memory across AI agents — search, save, compact, lint, audit.

Registry · 20 stars · 7 forks · Updated Apr 30, 2026

Quick Install

uvx palinode

The memory substrate for AI agents and developer tools. Git-versioned, file-native, MCP-first.

Your agent's memory is a folder of markdown files. Palinode indexes them with hybrid search, compacts them with an LLM, and serves them through MCP — so the same memory works in Claude Code, Cursor, Windsurf, Zed, VS Code (Continue/Cline), and any other MCP-compatible editor. Bring your own Obsidian vault, or use Palinode as one: palinode init --obsidian /path/to/vault scaffolds a full vault with graph defaults, daily-notes wiring, and an LLM-maintained wiki contract. Enterprises can govern AI memory the same way they govern code. If every service crashes, cat still works.

A palinode is a poem that retracts what was said before and says it better. That's what memory compaction does.


Supported Platforms

| Platform | Session Skill Path | MCP Config |
|---|---|---|
| Claude Code CLI | ~/.claude/skills/ | ~/.claude.json |
| Claude Desktop | ~/.claude/skills/ | claude_desktop_config.json |
| Cursor | .cursor/skills/ | .cursor/mcp.json |
| VS Code + Claude (Continue / Cline) | ~/.claude/skills/ | see MCP-INSTALL-RECIPES.md |
| JetBrains + Claude | ~/.claude/skills/ | ~/.claude.json |
| Codex CLI | N/A (no skills) | ~/.codex/config.toml |

All platforms share the same MCP server — install once on your server, connect from any IDE. See docs/MCP-SETUP.md and docs/MCP-INSTALL-RECIPES.md for per-client config snippets.


The Idea

Most agent memory is a black box. You can't read it, you can't diff it, you can't grep it when the vector DB is down. Palinode bets on plain files as the source of truth and builds everything else as a derived index.

Files (markdown + YAML frontmatter)
  ↓ watched
Index (SQLite-vec vectors + FTS5 keywords, single .db file)
  ↓ queried by
Interfaces (MCP server, REST API, CLI, OpenClaw plugin)
  ↓ compacted by
LLM (suggests updates → Palinode validates and writes them → git commits)

That's the whole architecture. One directory of .md files, one SQLite database, one API server. No Postgres, no Redis, no cloud dependency.


One Backend, Every Interface

Palinode doesn't care how you talk to it. The full toolkit — save, search, doctor, dedup-suggest, orphan-repair, diff, blame, rollback, and more — works through every interface:

| Interface | Transport | Best For |
|---|---|---|
| MCP Server | Streamable HTTP or stdio | Claude Code, Claude Desktop, Cursor, Windsurf, Zed, VS Code (Continue/Cline) |
| REST API | HTTP on :6340 | Scripts, webhooks, custom integrations |
| CLI | Wraps REST API | Cron jobs, SSH, shell scripts (8x fewer tokens than MCP) |
| Plugin | OpenClaw lifecycle hooks | Agent frameworks with inject/extract patterns |

Set up once on a server. Connect from any machine, any IDE, any agent framework. The MCP server is a pure HTTP client — it holds no state, no database connection, no embedder. Point it at the API and go.

{
  "mcpServers": {
    "palinode": { "type": "http", "url": "http://your-server:6341/mcp/" }
  }
}

That's the entire client config. Works with Claude Code, Claude Desktop, Cursor, Windsurf, Zed, and VS Code (Continue/Cline). palinode-mcp-sse serves streamable-HTTP at /mcp/ — the binary name is historical; use "type": "http", not "type": "sse". Always include the trailing slash in the URL. See docs/MCP-SETUP.md for editor-specific install recipes.


How It Works

Store — Typed markdown files (people, projects, decisions, insights) with YAML frontmatter. Git-versioned. Human-readable. Editable in Obsidian, VS Code, vim, or anything.

Index — A file watcher embeds with BGE-M3 and indexes with FTS5 as you save. Content-hash dedup skips re-embedding unchanged files (~90% savings). Single SQLite file, zero external services.
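Content-hash dedup is the simple trick that makes re-indexing cheap: hash the file, and only call the embedder when the hash changed. A minimal sketch (the cache layout and `embed` callable are stand-ins, not Palinode's internals):

```python
# Skip re-embedding when a file's content hash is unchanged.
# `embed` stands in for the real BGE-M3 call via Ollama.
import hashlib

def maybe_embed(path: str, text: str, hash_cache: dict, embed):
    digest = hashlib.sha256(text.encode()).hexdigest()
    if hash_cache.get(path) == digest:
        return None  # unchanged since last index pass — no embedding call
    hash_cache[path] = digest
    return embed(text)
```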

Search — Hybrid BM25 + vector search merged with Reciprocal Rank Fusion. Keyword precision when you need exact terms, semantic recall when you don't. Optional associative entity graph and prospective triggers.
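Reciprocal Rank Fusion merges the two rankings by rank position alone, so BM25 and vector scores never need to share a scale. A minimal sketch (k=60 is the conventional constant; Palinode's exact parameters may differ):

```python
# Reciprocal Rank Fusion: each ranking contributes 1/(k + rank) per
# document; documents near the top of either list float to the top.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```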

Compact — Weekly consolidation where an LLM suggests structured memory updates and Palinode applies the validated result to your files. Every compaction is a git commit you can review, blame, or revert.

Audit — git blame any fact. git diff any change. rollback any mistake. These aren't just git-compatible files — palinode_diff, palinode_blame, and palinode_rollback are first-class tools your agent can call.


Getting started in 60 seconds (Claude Code)

Already have Palinode installed and palinode-api running? Drop it into any project in one command:

cd your-project
palinode init

That scaffolds:

  • .claude/CLAUDE.md — memory instructions for the agent (appended if one already exists)
  • .claude/settings.json — a SessionEnd hook that auto-captures on /clear, logout, and normal exit
  • .claude/hooks/palinode-session-end.sh — the hook script itself
  • .mcp.json — points Claude Code at the palinode MCP server

Open the project in Claude Code and your agent will search prior context on startup, save decisions as you work, and snapshot the session on /clear. No server restarts, no settings menus, no copy-paste.

Re-run with --dry-run to preview, --force to overwrite, or --no-mcp / --no-hook to scope what gets installed.


Quick Start

# Install
git clone https://github.com/phasespace-labs/palinode && cd palinode
python3 -m venv venv && source venv/bin/activate
pip install -e .

# Create your memory directory
mkdir -p ~/.palinode/{people,projects,decisions,insights,daily}
cd ~/.palinode && git init
cp /path/to/palinode/palinode.config.yaml.example palinode.config.yaml  # adjust path

# Start services
PALINODE_DIR=~/.palinode palinode-api        # REST API on :6340
PALINODE_DIR=~/.palinode palinode-watcher     # auto-indexes on file save
PALINODE_DIR=~/.palinode palinode-mcp-sse     # MCP server on :6341 (streamable-HTTP at /mcp/; optional)

# Verify
curl http://localhost:6340/status

Your memory directory is private. It contains personal data. Never make it public. The code repo contains zero memory files.

For a pre-populated demo, copy examples/sample-memory/ to ~/.palinode/.


Usage Examples

Save a decision, recall it later

# During a session — save a decision
palinode save --type Decision "Chose SQLite over Postgres for the cache layer. \
  Reason: no ops burden, single-file deployment, good enough for our scale."

# Next week — search for it
palinode search "database decision for cache"

End-of-session capture

# Agent calls at end of coding session
palinode session-end \
  --summary "Migrated auth from JWT to session tokens" \
  --decisions "Session tokens stored server-side, 24h expiry" \
  --blockers "Need to update mobile client auth flow"

Audit trail — who decided what and when

# Trace a fact back to when it was recorded
palinode blame decisions/auth-migration.md

# See what changed across all memory in the last week
palinode diff --days 7

Tools

25 tools available through every interface:

| Tool | What It Does |
|---|---|
| search | Hybrid BM25 + vector search with category filter |
| save | Store a typed memory (person, decision, insight, project) |
| list | Browse memory files by type, filter by core status |
| read | Read the full content of a memory file |
| ingest | Fetch a URL and save as research |
| status | Health check — file counts, index stats, service status |
| entities | Entity graph — cross-references between memories |
| consolidate | Preview or run LLM-powered compaction |
| diff | What changed in the last N days |
| blame | Trace a fact back to the commit that recorded it |
| history | Git history for a file with diff stats and rename tracking |
| rollback | Revert a file to a previous commit (safe, creates new commit) |
| push | Sync memory to a remote git repo |
| trigger | Prospective recall — auto-inject when a topic comes up |
| lint | Health scan — orphans, stale files, missing fields |
| session_end | Capture summary, decisions, and blockers at end of session |
| prompt | List, show, or activate versioned LLM prompts |
| dedup_suggest | Before saving, surface existing files that overlap the draft |
| orphan_repair | Find semantic matches for broken [[wikilinks]] |
| doctor | Fast diagnostic pass — 18+ checks across paths, services, config, index |
| doctor_deep | Full diagnostic with canary write test (~10–15s) |

Every tool is accessible as palinode_<name> via MCP, palinode <name> via CLI, or POST/GET /<name> via the REST API.


Stack

| Layer | Choice | Why |
|---|---|---|
| Source of truth | Markdown + YAML frontmatter | Human-readable, git-versioned, portable |
| Vector index | SQLite-vec (embedded) | No server, single file, zero config |
| Keyword index | SQLite FTS5 (embedded) | BM25 for exact terms, zero dependencies |
| Embeddings | BGE-M3 via Ollama | Local, private, no API key needed |
| API | FastAPI | Lightweight, async, one process |
| MCP | Python MCP SDK (Streamable HTTP) | Works with every IDE over the network |
| CLI | Click (wraps REST API) | Shell-native, TTY-aware output |
| Behavior | PROGRAM.md | What to remember, how to extract, how to compact — edit one file to change all behavior |

Memory File Format

---
id: project-palinode
category: project
name: Palinode
core: true
status: active
entities: [person/paul]
last_updated: 2026-04-05T00:00:00Z
summary: "Persistent memory for AI agents."
canonical_question: "What is Palinode and what does it do?"
---
# Palinode

Your content here. As detailed or brief as you want.
Files marked `core: true` are always in context.
Everything else is retrieved on demand via hybrid search.
The `canonical_question` field anchors the file to the question it answers, improving search relevance.
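Because the format is just a YAML block between `---` fences, reading it needs nothing exotic. A hedged sketch with the standard library only (a real implementation would use a YAML parser to handle lists and nested fields; this handles only flat `key: value` lines):

```python
# Naive frontmatter splitter: returns (metadata dict, markdown body).
# Only flat string values are parsed — an illustration, not a YAML parser.
def split_frontmatter(text: str):
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip().strip('"')
    return meta, body
```

With this, a loader could inject every file where `meta.get("core") == "true"` into context and leave the rest to hybrid search.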

Open in Obsidian

Palinode stores every memory as a plain markdown file — which means your memory directory is already a valid Obsidian vault. Point Obsidian at the folder and you get graph view, backlinks, and Bases on top of Palinode's hybrid search and compaction. No sync job, no plugin to install, no two-source-of-truth problem.

palinode init --obsidian ~/palinode-vault

This scaffolds the vault directory layout, an _index.md Map of Content, a _README.md orientation page, and an opinionated .obsidian/ config (graph view colour-coded by category, daily-notes wired to daily/). Then open the directory in Obsidian.

The LLM follows a wiki-maintenance contract — it keeps entities: frontmatter and [[wikilinks]] in the note body in sync so the Obsidian graph stays accurate as new memories are saved. When you save a memory with entity references, Palinode appends an idempotent ## See also block linking them as wikilinks.
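The idempotent append behaviour described above might look like this minimal sketch (the append-once logic and link formatting are assumptions for illustration; the real contract also merges new links into an existing block):

```python
# Append a "## See also" wikilink block once; re-saving the same
# memory is a no-op, so repeated saves never duplicate the block.
def append_see_also(body: str, entities: list[str]) -> str:
    links = "\n".join(f"- [[{e.split('/')[-1]}]]" for e in entities)
    block = f"## See also\n{links}"
    if "## See also" in body:
        return body  # already present — idempotent no-op
    return body.rstrip() + "\n\n" + block + "\n"
```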

Two embedding-aware tools support wiki hygiene: palinode_dedup_suggest checks whether a draft overlaps an existing file before creating a duplicate, and palinode_orphan_repair finds semantic matches for broken [[wikilinks]]. Both are callable via MCP, CLI, and REST.

See docs/OBSIDIAN.md for the comprehensive guide: quickstart, wiki contract details, migration paths, and FAQ.


Diagnose with palinode doctor

Silent misconfiguration — a db_path pointing at the wrong file, a watcher indexing a stale directory, a phantom DB file — is the most common reason Palinode doesn't behave as expected after an upgrade or server move. palinode doctor catches this entire class of bugs.

palinode doctor

The command runs 18+ checks across paths, services, config consistency, index health, and disk state, and emits a structured report with a pass/warn/fail status for each. --fix mode applies safe automated repairs (creates missing directories, appends the CLAUDE.md Palinode block) — it never moves user data; phantom DB files and DB-path mismatches print suggested mv commands but never execute them.

Run palinode doctor after every install, upgrade, or server migration. See docs/DOCTOR.md for the full check catalog and --fix reference.


Configuration

All behavior is in palinode.config.yaml:

memory_dir: "~/.palinode"
ollama_url: "http://localhost:11434"
embedding_model: "bge-m3"

recall:
  search:
    top_k: 5
    threshold: 0.4
  core:
    max_chars_per_file: 3000

search:
  hybrid_enabled: true
  hybrid_weight: 0.5         # 0.0 = vector only, 1.0 = BM25 only

consolidation:
  llm_model: "llama3.1:8b"   # any chat model that outputs JSON
  llm_url: "http://localhost:11434"
  llm_fallbacks:              # tried in order if primary fails
    - model: "qwen2.5:14b-instruct"
      url: "http://localhost:11434"

All models are swappable. Any Ollama embedding model, any OpenAI-compatible chat endpoint. See palinode.config.yaml.example for the full reference.
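To make the `hybrid_weight` semantics concrete, here is a sketch of a linear blend of normalized scores — 0.0 means vector only, 1.0 means BM25 only. This illustrates what the knob controls, not Palinode's exact merge (the README describes the actual merge as Reciprocal Rank Fusion):

```python
# Linear blend of two score dicts keyed by document id.
# Assumes both score sets are already normalized to a comparable range.
def blend(bm25: dict, vector: dict, hybrid_weight: float = 0.5) -> list[str]:
    docs = set(bm25) | set(vector)
    scored = {
        d: hybrid_weight * bm25.get(d, 0.0) + (1 - hybrid_weight) * vector.get(d, 0.0)
        for d in docs
    }
    return sorted(scored, key=scored.get, reverse=True)
```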


Requirements

  • Python 3.11+
  • Ollama with bge-m3 (ollama pull bge-m3)
  • Git

Optional: a chat model for consolidation (any 7B+ works), OpenClaw for agent plugin hooks.


API Reference

| Method | Path | Description |
|---|---|---|
| GET | /status | Health check + stats |
| POST | /search | Hybrid search with filters |
| POST | /search-associative | Entity graph traversal |
| POST | /save | Create a typed memory file |
| POST | /ingest-url | Fetch URL, save as research |
| GET/POST | /triggers | Prospective recall triggers |
| POST | /consolidate | Run or preview compaction |
| GET | /list | Browse files by type |
| GET | /read?file_path=... | Read a memory file |
| GET | /history/{file_path} | Git log for a file |
| GET | /diff | Recent changes |
| GET | /blame/{file_path} | Git blame |
| POST | /rollback | Revert a file |
| POST | /push | Push to git remote |
| POST | /reindex | Rebuild indices |
| POST | /session-end | Capture session summary |
| POST | /lint | Health scan |

Design Principles

  1. Files are truth. Not databases, not vector stores. Markdown files that humans can read, edit, and version with git.

  2. Typed, not flat. People, projects, decisions, insights — each has structure. This enables reliable retrieval and consolidation.

  3. Consolidation, not accumulation. 100 sessions should produce 20 well-maintained files, not 100 unread dumps.

  4. Invisible when working. The human talks to their agent. Palinode works behind the scenes.

  5. Graceful degradation. Vector index down? Read files directly. Embedding service down? Grep. Machine off? It's a git repo, clone it anywhere.

  6. Zero taxonomy burden. The system classifies. The human reviews. If the human has to maintain a taxonomy, the system dies.


What's Unique

  • Your data, your files — No accounts, no cloud dependency, no vendor lock-in. Your memory is markdown files in a directory you control. Export is cp. Backup is git push. Whatever happens to any tool in this ecosystem, your data is plain text on your filesystem.
  • Cross-IDE memory — Your memory lives in one place. Connect from Claude Code, Cursor, Windsurf, Zed, or any MCP-compatible editor. Switch IDEs without losing context.
  • Git operations as agent tools — diff, blame, rollback, push exposed via MCP. No other system makes git ops callable by the agent.
  • Reviewable compaction — The LLM suggests structured memory updates and Palinode applies validated changes with full git history. Every compaction is a reviewable git commit.
  • Per-fact addressability — <!-- fact:slug --> IDs inline in markdown, invisible in rendering, preserved by git, targetable by compaction.
  • 4-phase injection — Core (always) + Topic (per-turn search) + Associative (entity graph) + Triggered (prospective recall).
  • Multi-transport MCP — stdio for local, Streamable HTTP for remote. One server, any IDE on any machine.
  • If everything crashes, cat still works.
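The per-fact IDs are ordinary HTML comments, so extracting them is a one-line regex. A sketch, assuming slugs are lowercase-with-hyphens as in the `fact:slug` example above:

```python
# HTML comments like <!-- fact:sqlite-over-postgres --> are invisible
# when rendered but grep-able and targetable by compaction.
import re

FACT_RE = re.compile(r"<!--\s*fact:([a-z0-9-]+)\s*-->")

def fact_ids(markdown: str) -> list[str]:
    return FACT_RE.findall(markdown)
```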

Acknowledgments

Palinode builds on ideas from Karpathy's LLM Knowledge Bases, Letta (tiered memory), and LangMem (typed schemas + background consolidation). See docs/ACKNOWLEDGMENTS.md for the full list.

See also the epistemic integrity discussion in the Karpathy gist thread — particularly the problem of LLM wikis that "synthesise without citing, drift from sources without knowing it, and present false certainty where disagreement exists." Git-based provenance is Palinode's answer to that problem.

If you know of prior art we missed, please open an issue.


License

MIT — Privacy Policy


Built by Paul Kyle with help from AI agents who use Palinode to remember building Palinode.
