
CodeCortex

Persistent codebase knowledge layer for AI agents. Your AI shouldn't re-learn your codebase every session.

Website · npm · GitHub

The Problem

Every AI coding session starts from scratch. When context compacts or a new session begins, the AI re-scans the entire codebase. Same files, same tokens, same wasted time. It's like hiring a new developer every session who has to re-learn everything before writing a single line.

The Solution

CodeCortex pre-digests codebases into layered knowledge files and serves them to any AI agent via MCP. Instead of re-understanding your codebase every session, the AI starts with knowledge.

Hybrid extraction combines tree-sitter (native N-API) for structure (symbols, imports, and calls across 28 languages) with the host LLM for semantics (what modules do and why they're built that way). No extra API keys required.
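As a rough illustration of the structural half, here is a toy TypeScript extractor that pulls top-level function and class names out of source text. CodeCortex does this with real tree-sitter parse trees, not regexes; the SymbolEntry type and extractSymbols function below are illustrative stand-ins only.

```typescript
// Toy structural extractor: finds function/class declarations by regex.
// CodeCortex itself walks tree-sitter parse trees; this is only a sketch.
interface SymbolEntry {
  name: string;
  kind: "function" | "class";
}

function extractSymbols(source: string): SymbolEntry[] {
  const symbols: SymbolEntry[] = [];
  const pattern = /\b(function|class)\s+([A-Za-z_$][\w$]*)/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(source)) !== null) {
    symbols.push({ name: match[2], kind: match[1] as "function" | "class" });
  }
  return symbols;
}

const sample = `
export function parseConfig(path: string) {}
class KnowledgeStore {}
`;
console.log(extractSymbols(sample));
// → two symbols: parseConfig (function), KnowledgeStore (class)
```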

Quick Start

# Install
npm install -g codecortex-ai

# Initialize knowledge for your project
cd /path/to/your-project
codecortex init

# Start MCP server (for AI agent access)
codecortex serve

# Check knowledge freshness
codecortex status

Connect to Claude Code

Add to your MCP config:

{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"],
      "cwd": "/path/to/your-project"
    }
  }
}

What Gets Generated

All knowledge lives in .codecortex/ as flat files in your repo:

.codecortex/
  cortex.yaml          # project manifest
  constitution.md      # project overview for agents
  overview.md          # module map + entry points
  graph.json           # dependency graph (imports, calls, modules)
  symbols.json         # full symbol index (functions, classes, types...)
  temporal.json        # git coupling, hotspots, bug history
  modules/*.md         # per-module deep analysis
  decisions/*.md       # architectural decision records
  sessions/*.md        # session change logs
  patterns.md          # coding patterns and conventions

Six Knowledge Layers

| Layer | What | File |
| --- | --- | --- |
| 1. Structural | Modules, deps, symbols, entry points | graph.json + symbols.json |
| 2. Semantic | What each module does, data flow, gotchas | modules/*.md |
| 3. Temporal | Git behavioral fingerprint: coupling, hotspots, bug history | temporal.json |
| 4. Decisions | Why things are built this way | decisions/*.md |
| 5. Patterns | How code is written here | patterns.md |
| 6. Sessions | What changed between sessions | sessions/*.md |

The Temporal Layer

This is the killer differentiator. The temporal layer tells agents "if you touch file X, you MUST also touch file Y" even when there's no import between them. This comes from git co-change analysis, not static code analysis.

Example from a real codebase:

  • routes.ts and worker.ts co-changed in 9/12 commits (75%) with zero imports between them
  • Without this knowledge, an AI editing one file would miss the required change in the other roughly 75% of the time
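Co-change coupling of this kind can be derived from git history alone. Here is a minimal sketch, assuming each commit has been reduced to the list of files it touched; the real temporal analysis behind temporal.json is richer than this.

```typescript
// Minimal co-change coupling: fraction of commits touching `a` that also touch `b`.
function coChangeRate(commits: string[][], a: string, b: string): number {
  const touchingA = commits.filter((files) => files.includes(a));
  if (touchingA.length === 0) return 0;
  const touchingBoth = touchingA.filter((files) => files.includes(b));
  return touchingBoth.length / touchingA.length;
}

// Toy history: 12 commits touch routes.ts, 9 of which also touch worker.ts.
const history: string[][] = [
  ...Array.from({ length: 9 }, () => ["routes.ts", "worker.ts"]),
  ...Array.from({ length: 3 }, () => ["routes.ts"]),
];
console.log(coChangeRate(history, "routes.ts", "worker.ts")); // → 0.75
```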

MCP Tools (14)

Read Tools (9)

| Tool | Description |
| --- | --- |
| get_project_overview | Constitution + overview + graph summary |
| get_module_context | Module doc by name, includes temporal signals |
| get_session_briefing | Changes since last session |
| search_knowledge | Keyword search across all knowledge |
| get_decision_history | Decision records filtered by topic |
| get_dependency_graph | Import/export graph, filterable |
| lookup_symbol | Symbol by name/file/kind |
| get_change_coupling | What files must I also edit if I touch X? |
| get_hotspots | Files ranked by risk (churn × coupling) |
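get_hotspots ranks files by churn × coupling. The exact scoring formula isn't documented here, so the sketch below simply multiplies the two; FileStats and rankHotspots are hypothetical names, not part of the CodeCortex API.

```typescript
// Hotspot score = churn × coupling (assumed formula; the real ranking may differ).
interface FileStats {
  path: string;
  churn: number;    // commits touching the file
  coupling: number; // strongest co-change rate with any other file, 0..1
}

function rankHotspots(stats: FileStats[]): FileStats[] {
  return [...stats].sort(
    (x, y) => y.churn * y.coupling - x.churn * x.coupling
  );
}

const files: FileStats[] = [
  { path: "routes.ts", churn: 12, coupling: 0.75 },
  { path: "utils.ts", churn: 20, coupling: 0.1 },
];
console.log(rankHotspots(files)[0].path); // → "routes.ts" (12 × 0.75 = 9 > 2)
```

High-churn but loosely coupled files (like utils.ts here) rank below files whose edits routinely drag other files along.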

Write Tools (5)

| Tool | Description |
| --- | --- |
| analyze_module | Returns source files + structured prompt for LLM analysis |
| save_module_analysis | Persists LLM analysis to modules/*.md |
| record_decision | Saves architectural decision to decisions/*.md |
| update_patterns | Merges coding pattern into patterns.md |
| report_feedback | Agent reports incorrect knowledge for next analysis |

CLI Commands

| Command | Description |
| --- | --- |
| codecortex init | Discover project + extract symbols + analyze git history |
| codecortex serve | Start MCP server (stdio transport) |
| codecortex update | Re-extract changed files, update affected modules |
| codecortex status | Show knowledge freshness, stale modules, symbol counts |

Token Efficiency

CodeCortex uses a three-tier memory model to minimize token usage:

Session start (HOT only):            ~4,300 tokens
Working on a module (+WARM):         ~5,000 tokens
Need coding patterns (+COLD):        ~5,900 tokens

vs. raw scan of entire codebase:    ~37,800 tokens

85-90% token reduction. 7-10x efficiency gain.
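These percentages follow directly from the numbers above: 1 - 4,300 / 37,800 ≈ 88.6% for a HOT-only session start. A quick check:

```typescript
// Token reduction per tier vs. a raw full-codebase scan (numbers from the table above).
const rawScan = 37_800;
const tiers = { hot: 4_300, warm: 5_000, cold: 5_900 };

function reduction(tokens: number): number {
  return (1 - tokens / rawScan) * 100;
}

for (const [tier, tokens] of Object.entries(tiers)) {
  console.log(`${tier}: ${reduction(tokens).toFixed(1)}% saved`);
}
// prints hot: 88.6% saved, warm: 86.8% saved, cold: 84.4% saved
```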

Supported Languages (28)

| Category | Languages |
| --- | --- |
| Web | TypeScript, TSX, JavaScript, Liquid |
| Systems | C, C++, Objective-C, Rust, Zig, Go |
| JVM | Java, Kotlin, Scala |
| .NET | C# |
| Mobile | Swift, Dart |
| Scripting | Python, Ruby, PHP, Lua, Bash, Elixir |
| Functional | OCaml, Elm, Emacs Lisp |
| Other | Solidity, Vue, CodeQL |

Tech Stack

  • TypeScript ESM, Node.js 20+
  • tree-sitter (native N-API) + 28 language grammar packages
  • @modelcontextprotocol/sdk - MCP server
  • commander - CLI
  • simple-git - git integration
  • yaml, zod, glob

License

MIT
