
cortex-mcp

Persistent memory for AI coding assistants. Injects context from past sessions into every LLM request.

Updated: Feb 24, 2026

Quick Install

npx -y cortex-mcp

Cortex MCP — Persistent AI Memory


Give your AI coding assistant a brain that remembers across sessions.

Cortex is an MCP (Model Context Protocol) server that provides persistent, intelligent memory to any AI coding assistant — Cursor, Claude Code, Windsurf, Cline, or any MCP-compatible tool.

What's New in v1.2

  • 3-5x faster force_recall — Brain layers 6-12 now run in parallel (was 15-19s, now 3-5s)
  • Result caching — Repeated force_recall calls return instantly (30s TTL)
  • 7 SQL queries → 1 — Context builder consolidated for faster startup
  • 17 fixes across performance, robustness, and code quality
  • Shared tag utility — Cleaner codebase, single source of truth for tag extraction

See CHANGELOG.md for full details.

The Problem

Every time you start a new conversation, your AI assistant forgets everything:

  • Coding conventions you already explained
  • Bugs you already fixed
  • Decisions you already made
  • What files you were working on

Cortex solves this. It stores, ranks, and proactively recalls context so your AI never starts from zero.

Quick Start

Option 1: npm install (recommended)

npm install -g cortex-mcp

Then add to your MCP config:

{
  "mcpServers": {
    "cortex": {
      "command": "cortex-mcp",
      "transportType": "stdio"
    }
  }
}

Option 2: Clone from source

git clone https://github.com/jaswanthkumarj1234-beep/cortex-mcp.git
cd cortex-mcp
npm install
npm run build

Then add to your MCP config:

{
  "mcpServers": {
    "cortex": {
      "command": "node",
      "args": ["<path-to>/cortex-mcp/dist/mcp-stdio.js"],
      "transportType": "stdio"
    }
  }
}

Restart your IDE and Cortex is active.

Option 3: Direct Download (Binaries)

No Node.js required. Download the prebuilt binary for your platform from the latest GitHub release, then configure your MCP client to run the executable directly.

Try the Demo

See Cortex in action without any AI client:

npm run demo          # Interactive demo — store, search, dedup
npm run benchmark     # Performance benchmark — write, read, FTS ops/sec

IDE-Specific Setup Guides

Step-by-step instructions for your IDE: Cursor · Claude Code · Windsurf · Cline · Copilot

Recipes & Use Cases

10 practical examples: docs/recipes.md — conventions, bug tracking, anti-hallucination, team onboarding, and more.

Features

Why Cortex?

| Feature | Cortex | Other MCP memory servers |
| --- | --- | --- |
| Retrieval | Hybrid (FTS + Vector + Graph) | Usually FTS only |
| Auto-learning | Extracts memories from AI responses | Manual store only |
| Hallucination detection | verify_code + verify_files | No |
| Git integration | Auto-captures commits, diffs, branch context | No |
| Cognitive features | Decay, attention, contradiction detection, consolidation | No |
| Brain pipeline | 12+ layer context injection | Simple key-value recall |
| Setup | npm i -g cortex-mcp + 1 config line | Varies |
| Offline | 100% local SQLite (no API key needed) | Often requires API |
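The "Hybrid" retrieval row above combines three signals into one ranking. A minimal sketch of what such score fusion could look like — the weights, field names, and shape here are illustrative assumptions, not Cortex's actual internals:

```typescript
// Illustrative score fusion for hybrid retrieval (FTS + vector + graph).
// All three inputs are assumed pre-normalized to the 0..1 range.
interface Scored {
  id: string;
  fts: number;    // full-text rank
  vector: number; // cosine similarity
  graph: number;  // graph-proximity score
}

function fuse(
  hits: Scored[],
  w = { fts: 0.4, vector: 0.4, graph: 0.2 } // example weights
): Scored[] {
  const score = (h: Scored) =>
    h.fts * w.fts + h.vector * w.vector + h.graph * w.graph;
  // Highest combined score first.
  return [...hits].sort((a, b) => score(b) - score(a));
}
```

The weighting lets a memory that is only a moderate text match still rank highly when it is semantically close or graph-adjacent to the query.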

16 MCP Tools

| Tool | Purpose |
| --- | --- |
| force_recall | Full brain dump at conversation start (12+ layer pipeline) |
| recall_memory | Search memories by topic (FTS + vector + graph) |
| store_memory | Store a decision, correction, convention, or bug fix |
| quick_store | One-liner memory storage with auto-classification |
| auto_learn | Extract memories from AI responses automatically |
| scan_project | Scan project structure, stack, git, exports, architecture |
| verify_code | Check if imports/exports/env vars actually exist |
| verify_files | Check if file paths are real or hallucinated |
| get_context | Get compressed context for the current file |
| get_stats | Memory database statistics |
| list_memories | List all active memories |
| update_memory | Update an existing memory |
| delete_memory | Delete a memory |
| export_memories | Export all memories to JSON |
| import_memories | Import memories from JSON |
| health_check | Server health check |
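Like any MCP server, these tools are invoked through the standard JSON-RPC tools/call method over stdio. A sketch of the request envelope — the argument keys shown are illustrative, not Cortex's documented schema:

```typescript
// Build a JSON-RPC 2.0 envelope for an MCP tools/call request.
// Tool names match the table above; argument keys are illustrative.
function toolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  });
}

// e.g. store a convention memory:
const req = toolCall(1, "store_memory", {
  type: "CONVENTION",
  content: "All API routes start with /api/v1",
});
```

Your MCP client builds and sends these envelopes for you; the sketch is only to show what crosses the stdio boundary.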

12+ Layer Brain Pipeline

Every conversation starts with force_recall, which runs 12+ layers (layers 6-12 run in parallel for speed):

| Layer | Feature | Parallel? |
| --- | --- | --- |
| 0 | Session management — track what you're working on | No |
| 1 | Maintenance — decay, boost corrections, consolidate | No |
| 2 | Attention focus — detect debugging vs coding vs reviewing | No |
| 3 | Session continuity — resume where you left off | No |
| 4 | Hot corrections — frequently-corrected topics | No |
| 5 | Core context — corrections, decisions, conventions, bug fixes | No |
| 6 | Anticipation — proactive recall based on current file | Yes |
| 7 | Temporal — what changed today, yesterday, this week | Yes |
| 8 | Workspace git — branch, recent commits, diffs | Yes |
| 8.5 | Git memory — auto-capture commits + file changes | Yes |
| 9 | Topic search — FTS + decay + causal chain traversal | Yes |
| 10 | Knowledge gaps — flag files with zero memories | Yes |
| 11 | Export map — all functions/classes (anti-hallucination) | Yes |
| 12 | Architecture graph — layers, circular deps, API endpoints | Yes |
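The parallelization that made force_recall 3-5x faster amounts to launching the independent layers at once and merging their output in a fixed order. A sketch of that pattern with Promise.all — the names and result shape are illustrative, not Cortex's actual internals:

```typescript
// Independent brain layers run concurrently; the merged context stays in
// deterministic layer order regardless of which layer finishes first.
type LayerResult = { layer: number; context: string };

function formatLayers(results: LayerResult[]): string {
  return [...results]
    .sort((a, b) => a.layer - b.layer)
    .map((r) => `[L${r.layer}] ${r.context}`)
    .join("\n");
}

async function runParallelLayers(
  layers: Array<() => Promise<LayerResult>>
): Promise<string> {
  // All layers start immediately; total latency is roughly the slowest
  // layer, not the sum of all of them.
  return formatLayers(await Promise.all(layers.map((run) => run())));
}
```

This is why the wall-clock win is largest when several layers each do independent I/O (git, filesystem scans, SQLite queries).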

Cognitive Features

  • Confidence decay — old unused memories fade, frequently accessed ones strengthen
  • Attention ranking — debugging context boosts bug-fix memories, coding boosts conventions
  • Contradiction detection — new memories automatically supersede conflicting old ones
  • Memory consolidation — similar memories merge into higher-level insights
  • Cross-session threading — related sessions link by topic overlap
  • Learning rate — topics corrected 3+ times get CRITICAL priority
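Confidence decay and reinforcement can be pictured as an exponential half-life curve countered by an access-count boost. A minimal sketch — the half-life, boost factor, and cap are made-up constants for illustration, not Cortex's actual tuning:

```typescript
// Exponential confidence decay with access-based reinforcement.
// halfLifeDays, the 0.1 boost step, and the 2x cap are illustrative.
function decayedConfidence(
  initial: number,          // confidence when stored, 0..1
  daysSinceAccess: number,  // time since the memory was last recalled
  accessCount: number,      // how often it has been recalled
  halfLifeDays = 30
): number {
  const decay = Math.pow(0.5, daysSinceAccess / halfLifeDays);
  // Frequently accessed memories strengthen, capped at 2x.
  const boost = Math.min(1 + 0.1 * accessCount, 2);
  return Math.min(initial * decay * boost, 1);
}
```

Under this model an untouched memory halves in confidence every half-life, while one recalled often can hold near its original strength.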

Performance & Reliability

  • force_recall latency: ~3-5s first call, instant on cache hit (v1.2 — was 15-19s)
  • recall_memory latency: <1ms average (local SQLite + WAL)
  • Throughput: 1800+ ops/sec (verified via Soak Test)
  • Stability: Zero memory leaks over sustained operation
  • Protocol: Full JSON-RPC 2.0 compliance (passed E2E suite)
  • CI: Tested on Node 18, 20, 22 with lint + build + E2E on every push

Anti-Hallucination

  • Deep export map scans every source file for all exported functions, classes, types
  • When AI references a function that doesn't exist, verify_code suggests the closest real match
  • Architecture graph shows actual project layers and dependencies
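The "closest real match" suggestion can be done with a simple edit-distance search over the export map. A sketch under that assumption — Cortex's actual matcher may use a different metric:

```typescript
// Classic Levenshtein edit distance via dynamic programming.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
  return dp[a.length][b.length];
}

// Suggest the real export closest to a hallucinated name.
function closestExport(name: string, exportMap: string[]): string {
  return exportMap.reduce((best, cand) =>
    levenshtein(name, cand) < levenshtein(name, best) ? cand : best
  );
}
```

So if the AI invents useDebounse, the verifier can point it at the real useDebounce instead of letting the hallucination stand.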

Optional: LLM Enhancement

Cortex works fully without any API key. Optionally, add an LLM key for smarter memory extraction:

# Option 1: OpenAI
set OPENAI_API_KEY=sk-your-key

# Option 2: Anthropic
set ANTHROPIC_API_KEY=sk-ant-your-key

# Option 3: Local (Ollama, free)
set CORTEX_LLM_KEY=ollama
set CORTEX_LLM_BASE_URL=http://localhost:11434

Memory Types

| Type | Use For |
| --- | --- |
| CORRECTION | "Don't use var, use const" |
| DECISION | "We chose PostgreSQL over MongoDB" |
| CONVENTION | "All API routes start with /api/v1" |
| BUG_FIX | "Fixed race condition in auth middleware" |
| INSIGHT | "The codebase uses a layered architecture" |
| FAILED_SUGGESTION | "Tried Redis caching, too complex for this project" |
| PROVEN_PATTERN | "useDebounce hook works well for search inputs" |
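quick_store's auto-classification maps free text onto these types. A toy keyword heuristic illustrating the idea — the real classifier (optionally backed by an LLM key, see above) is more sophisticated, and these keywords are assumptions:

```typescript
// Toy keyword heuristic mapping free text to a memory type.
// Types match the table above; the regexes are illustrative only.
type MemoryType =
  | "CORRECTION" | "DECISION" | "CONVENTION"
  | "BUG_FIX" | "INSIGHT" | "FAILED_SUGGESTION" | "PROVEN_PATTERN";

function classify(text: string): MemoryType {
  const t = text.toLowerCase();
  if (/\b(don't|never|instead of)\b/.test(t)) return "CORRECTION";
  if (/\b(fixed|bug|race condition|crash)\b/.test(t)) return "BUG_FIX";
  if (/\b(we chose|decided)\b/.test(t)) return "DECISION";
  if (/\b(all|always|must|start with)\b/.test(t)) return "CONVENTION";
  if (/\btried\b/.test(t) && /\b(too|didn't|failed)\b/.test(t))
    return "FAILED_SUGGESTION";
  if (/\b(works well|pattern)\b/.test(t)) return "PROVEN_PATTERN";
  return "INSIGHT"; // fallback for general observations
}
```

The point is only that one line of text plus a classifier is enough for a useful default type, which the AI can still override via store_memory.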

Architecture

src/
├── server/          # MCP handler (12+ layer brain pipeline) + dashboard
├── memory/          # 17 cognitive modules (decay, attention, anticipation, etc.)
├── scanners/        # Project scanner, code verifier, export map, architecture graph
├── db/              # SQLite storage, event log
├── security/        # Rate limiter, encryption
├── hooks/           # Git capture
├── config/          # Configuration
└── cli/             # Setup wizard

Pricing & Features

Cortex is open core: the base version is free forever, and a PRO tier adds extras such as priority support.

| Feature | FREE (npm install) | PRO ($9/mo) |
| --- | --- | --- |
| Memory Capacity | ∞ Unlimited | ∞ Unlimited |
| Brain Layers | 12+ (Full) | 12+ (Full) |
| All 16 Tools | Yes | Yes |
| Auto-Learn | Yes | Yes |
| Export Map (Anti-Hallucination) | Yes | Yes |
| Architecture Graph | Yes | Yes |
| Git Memory | Yes | Yes |
| Confidence Decay | Yes | Yes |
| Contradiction Detection | Yes | Yes |
| Priority Support | — | Yes |

Launch Edition: All features are currently free. Get started now and lock in lifetime access.

Activate PRO

  1. Get a license key from your Cortex AI Dashboard
  2. Set it in your environment:
    # Option A: Environment variable
    export CORTEX_LICENSE_KEY=CORTEX-XXXX-XXXX-XXXX-XXXX
    
    # Option B: License file
    echo CORTEX-XXXX-XXXX-XXXX-XXXX > ~/.cortex/license
    
  3. Restart your IDE / MCP server.

System Architecture

graph TB
    AI["AI Client (Cursor, Claude, etc.)"]
    MCP["MCP Server (stdio)"]
    ML["Memory Layer"]
    DB["SQLite Database"]
    SEC["Security Layer"]
    API["License API"]

    AI -->|"JSON-RPC via stdio"| MCP
    MCP --> ML
    ML --> DB
    MCP --> SEC
    SEC -->|"verify license"| API

    subgraph "Memory Layer"
        ML --> EMB["Embedding Manager"]
        ML --> AL["Auto-Learner"]
        ML --> QG["Quality Gates"]
        ML --> TD["Temporal Decay"]
    end

    subgraph "Retrieval"
        ML --> HR["Hybrid Retriever"]
        HR --> VS["Vector Search"]
        HR --> FTS["Full-Text Search"]
        HR --> RR["Recency Ranker"]
    end

FAQ / Troubleshooting

Cortex doesn't start or my AI client can't connect
  1. Make sure Node.js >=18 is installed: node --version
  2. Try running manually: npx cortex-mcp — check stderr for errors
  3. Check if another Cortex instance is running on the same workspace
  4. Delete the cache and restart: rm -rf .ai/brain-data
Memories aren't being recalled
  1. Confirm your AI client's system prompt includes the Cortex rules (run cortex-setup to install them)
  2. Check the dashboard at http://localhost:3456 to see stored memories
  3. Verify memories exist: the AI should call force_recall at conversation start
"better-sqlite3" build errors on install

better-sqlite3 requires a C++ compiler:

  • Windows: Install Visual Studio Build Tools
  • macOS: Run xcode-select --install
  • Linux: Run sudo apt-get install build-essential python3
License key not working
  1. Verify the key format: CORTEX-XXXX-XXXX-XXXX-XXXX
  2. Check your internet connection (license is verified online on first use)
  3. Clear the cache: rm ~/.cortex/license-cache.json
  4. Try setting it via environment variable: export CORTEX_LICENSE_KEY=CORTEX-...
Dashboard not loading
  1. Default port is 3456. Check if something else is using it: lsof -i :3456
  2. Set a custom port: export CORTEX_PORT=4000
  3. The dashboard URL is shown in your AI client's stderr on startup

Contributing

See CONTRIBUTING.md for development setup, coding conventions, and the PR process.

License

MIT
