
Open Brain MCP Server

A personal semantic knowledge base for storing, searching, and retrieving memories and work history in natural language. It features vector-based search, tool discovery via a registry, and indexing of Cursor agent transcripts, backed by Supabase or Postgres.

glama · Updated Mar 8, 2026

A personal semantic knowledge base exposed as MCP tools. Store, search, and retrieve memories using natural language across Cursor, Claude Desktop, or any MCP-compatible client.

Tools

Tool                   Description
search_brain           Semantic similarity search across all memories
add_memory             Embed and store a new piece of knowledge
recall                 Filtered list retrieval by source, tags, or date — no embedding needed
forget                 Delete a memory by UUID
brain_stats            Counts and breakdown by source
discover_tools         Semantic search across the tool registry (Toolshed)
index_cursor_chats     Index Cursor agent transcripts as searchable work history
search_work_history    Keyword search across raw Cursor transcript files
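Each of these tools is invoked over MCP with a standard JSON-RPC 2.0 tools/call request. A minimal sketch of building such a request follows; the argument names (query, limit) are illustrative assumptions, not the server's documented input schema:

```typescript
// Sketch: build the JSON-RPC 2.0 "tools/call" request an MCP client sends.
// The argument names below (query, limit) are illustrative assumptions.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call" as const,
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "search_brain", {
  query: "pgvector migration notes",
  limit: 5,
});
console.log(JSON.stringify(req));
```

In practice an MCP client library (or Cursor itself) builds and sends this envelope for you; the sketch only shows the wire shape.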

Setup

cd mcp-server
npm install
cp .env.example .env
# edit .env with your credentials

Configuration

All configuration is via environment variables in .env.

Required (always)

Variable              Description
OPENROUTER_API_KEY    Used to generate embeddings via OpenRouter

Database backend

The server supports two database backends. Set DB_BACKEND to choose (default: supabase).
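The selection rule is easy to mirror in client code. A sketch of reading DB_BACKEND with the documented default (chooseBackend is a hypothetical helper, not part of the server's API):

```typescript
// Sketch: resolve the DB_BACKEND setting with its documented default.
// chooseBackend is a hypothetical helper, not part of the server's API.
type Backend = "supabase" | "postgres";

function chooseBackend(env: Record<string, string | undefined>): Backend {
  const raw = (env.DB_BACKEND ?? "supabase").toLowerCase();
  if (raw === "supabase" || raw === "postgres") return raw;
  throw new Error("Unsupported DB_BACKEND: " + raw);
}
```

Failing fast on an unknown value is preferable to silently falling back, since the two backends need different credentials.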

Supabase (default)

DB_BACKEND=supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key

Raw Postgres

Point the server at any Postgres instance with the pgvector extension and the brain_memories schema applied.

DB_BACKEND=postgres
DATABASE_URL=postgresql://user:password@host:5432/dbname

Both backends use the same schema and the same match_memories SQL function. See Database Schema below.
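The heavy lifting in match_memories is a vector-similarity ordering delegated to pgvector. Conceptually, the ranking it produces looks like the following (a purely illustrative TypeScript sketch, not the actual SQL function):

```typescript
// Illustrative cosine similarity and ranking, mimicking the ordering that
// pgvector computes inside match_memories. Not the actual SQL function.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rankBySimilarity<T extends { embedding: number[] }>(
  query: number[],
  rows: T[]
): T[] {
  // Highest similarity first, as a semantic search would return them.
  return [...rows].sort(
    (x, y) => cosine(query, y.embedding) - cosine(query, x.embedding)
  );
}
```

The real function runs inside Postgres and uses an index rather than a full scan, but the ordering semantics are the same.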

Optional

Variable                 Default                          Description
EMBEDDING_MODEL          openai/text-embedding-3-small    OpenRouter embedding model
EMBEDDING_DIMENSIONS     1536                             Must match the model output and schema
MCP_HTTP_PORT            3100                             Port for the HTTP/SSE transport
CURSOR_TRANSCRIPTS_DIR   (unset)                          Path to Cursor agent-transcripts directory; enables index_cursor_chats and search_work_history
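Because EMBEDDING_DIMENSIONS must agree with both the model output and the halfvec(1536) column, checking the vector length before insert is a cheap safeguard. A sketch (assertDimensions is a hypothetical helper, not part of the server):

```typescript
// Sketch: verify an embedding's length before writing it to halfvec(1536).
// assertDimensions is a hypothetical helper, not part of the server.
function assertDimensions(embedding: number[], expected: number): number[] {
  if (embedding.length !== expected) {
    throw new Error(
      "Embedding has " + embedding.length + " dimensions, expected " +
      expected + "; check EMBEDDING_MODEL and EMBEDDING_DIMENSIONS"
    );
  }
  return embedding;
}
```

A mismatch here would otherwise surface later as an opaque insert error from Postgres.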

Running

stdio transport (Cursor / Claude Desktop)

npm run dev:stdio       # development (tsx)
npm run start:stdio     # production (compiled JS)

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "open-brain": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-server/src/stdio.ts"],
      "env": {
        "DB_BACKEND": "supabase",
        "SUPABASE_URL": "...",
        "SUPABASE_SERVICE_ROLE_KEY": "...",
        "OPENROUTER_API_KEY": "..."
      }
    }
  }
}

To use raw Postgres instead, swap the env block:

{
  "env": {
    "DB_BACKEND": "postgres",
    "DATABASE_URL": "postgresql://user:pass@host:5432/dbname",
    "OPENROUTER_API_KEY": "..."
  }
}

HTTP / SSE transport (network-accessible)

npm run dev:http        # development
npm run start:http      # production

Endpoints:

Endpoint          Description
GET /sse          SSE stream (MCP SSE transport)
POST /messages    MCP message handling
GET /health       Health check

Database Schema

Both backends require the following on the Postgres instance:

  • pgvector extension (for halfvec type)
  • brain_memories table
  • match_memories SQL function
  • brain_stats view

Schema is managed via the migrations in supabase/migrations/. For a raw Postgres instance, run the migration files in order against your database:

001_initial_schema.sql
002_open_brain.sql
003_brain_rls.sql
004_vector_halfvec.sql
005_uuid_default.sql
006_storage_fillfactor.sql
007_column_reorder.sql

brain_memories table

CREATE TABLE brain_memories (
  id              uuid          NOT NULL DEFAULT gen_random_uuid(),
  created_at      timestamptz            DEFAULT NOW(),
  updated_at      timestamptz            DEFAULT NOW(),
  source          text          NOT NULL DEFAULT 'manual',
  content         text          NOT NULL,
  tags            text[]                 DEFAULT '{}',
  source_metadata jsonb                  DEFAULT '{}',
  embedding       halfvec(1536)
);

Valid source values: manual, telegram, cursor, api, conversations, knowledge, work_history, toolshed.
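A client-side TypeScript mirror of this row, with source narrowed to the valid values above, can be useful when consuming tool results. These types are an illustrative sketch; the server does not publish them:

```typescript
// Sketch: client-side types mirroring the brain_memories schema above.
// Illustrative only; the server does not export these types.
type MemorySource =
  | "manual" | "telegram" | "cursor" | "api"
  | "conversations" | "knowledge" | "work_history" | "toolshed";

interface BrainMemory {
  id: string;                            // uuid
  created_at: string;                    // timestamptz as ISO 8601
  updated_at: string;
  source: MemorySource;
  content: string;
  tags: string[];
  source_metadata: Record<string, unknown>;
  embedding: number[] | null;            // halfvec(1536), null until embedded
}

// Hypothetical factory applying the schema defaults for a new row;
// id and timestamps are left to the database defaults.
function draftMemory(
  content: string,
  source: MemorySource = "manual"
): Pick<BrainMemory, "source" | "content" | "tags" | "source_metadata" | "embedding"> {
  return { source, content, tags: [], source_metadata: {}, embedding: null };
}
```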


Toolshed

The Toolshed (discover_tools) solves the "tool explosion" problem. Instead of injecting hundreds of MCP tool schemas into the agent context, the agent calls discover_tools with a natural language query and gets back only the tools relevant to the current task.

Tool descriptions are loaded from tool-registry.json and embedded into brain_memories (source toolshed) at startup. Indexing is idempotent.
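Idempotency here means re-running the startup indexing adds nothing new. A toy sketch of that skip-if-present loop, where an in-memory Set stands in for the existing toolshed rows:

```typescript
// Toy sketch of idempotent indexing: entries already present are skipped.
// The Set stands in for existing brain_memories rows with source "toolshed".
function indexRegistry(
  registry: { name: string; description: string }[],
  existing: Set<string>
): string[] {
  const newlyIndexed: string[] = [];
  for (const tool of registry) {
    if (existing.has(tool.name)) continue; // already embedded on a prior run
    existing.add(tool.name);
    newlyIndexed.push(tool.name);
  }
  return newlyIndexed;
}
```

Running the loop twice over the same registry indexes everything once and nothing the second time, which is the property the server relies on at startup.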


Work History Indexing

When CURSOR_TRANSCRIPTS_DIR is set, two additional tools are enabled:

  • index_cursor_chats — reads JSONL transcript files from the directory, embeds each session summary, and stores it as a work_history memory. Re-running is idempotent (already-indexed sessions are skipped).
  • search_work_history — keyword search across raw transcript files for exact phrase matching. Complements the semantic search_brain.
Example:

CURSOR_TRANSCRIPTS_DIR=/Users/you/.cursor/projects/.../agent-transcripts
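The keyword path is plain substring matching over transcript lines, in contrast to the embedding-based search_brain. A minimal sketch (searchLines is a hypothetical helper, not the server's implementation):

```typescript
// Sketch: case-insensitive exact-phrase matching over transcript lines,
// the kind of search search_work_history performs on raw JSONL files.
// searchLines is a hypothetical helper, not the server's implementation.
function searchLines(lines: string[], phrase: string): string[] {
  const needle = phrase.toLowerCase();
  return lines.filter((line) => line.toLowerCase().includes(needle));
}
```

This is why the two tools complement each other: search_work_history finds exact identifiers and phrases that embeddings can blur, while search_brain finds conceptually related material that shares no keywords.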

Development

npm run build           # compile TypeScript to dist/
npm run dev:stdio       # run stdio server with tsx (hot reload)
npm run dev:http        # run HTTP server with tsx (hot reload)
