MCP Hub

codex-mcp-tool


MCP server bridging AI assistants to OpenAI Codex CLI for code analysis and review

Registry · npm · GitHub · 42 downloads/wk

Stars: 20 · Forks: 6 · Tools: 13 · Updated: Mar 6, 2026 · Validated: Mar 8, 2026
Validation Details

Duration: 8.2s

Server: codex-cli-mcp v2.1.1

Quick Install

npx -y @trishchuk/codex-mcp-tool

Codex MCP Tool


MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via @ file references, multi-turn conversations, sandboxed edits, and structured change mode.

Features

  • File Analysis — Reference files with @src/, @package.json syntax
  • Multi-Turn Sessions — Conversation continuity with workspace isolation
  • Native Resume — Uses codex resume for context preservation (CLI v0.36.0+)
  • Local OSS Models — Run with Ollama or LM Studio via localProvider
  • Web Search — Research capabilities with search: true
  • Sandbox Mode — Safe code execution with --full-auto
  • Change Mode — Structured OLD/NEW patch output for refactoring
  • Brainstorming — SCAMPER, design-thinking, lateral thinking frameworks
  • Health Diagnostics — CLI version, features, and session monitoring
  • Cross-Platform — Windows, macOS, Linux fully supported
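
As a concrete illustration, several of these features can be combined in a single `ask-codex` call. This sketch uses the parameter names documented in the Advanced Options section below; the file path is a hypothetical example:

```json
{
  "prompt": "refactor @src/utils.ts to use async/await",
  "changeMode": true,
  "sandbox": true
}
```

With `changeMode` enabled, the server returns structured OLD/NEW patch output rather than free-form prose, which makes the result easier to apply mechanically.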

Quick Start

claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool

Prerequisites: Node.js 18+, Codex CLI installed and authenticated.

Configuration

{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}

Config locations: macOS: ~/Library/Application Support/Claude/claude_desktop_config.json | Windows: %APPDATA%\Claude\claude_desktop_config.json

Usage Examples

// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';

// With specific model
'use codex with model gpt-5.4 to analyze @algorithm.py';

// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';

// Brainstorming
'brainstorm ways to optimize CI/CD using SCAMPER method';

// Sandbox mode
'use codex sandbox:true to create and run a Python script';

// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';

// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';

Tools

| Tool | Description |
| --- | --- |
| `ask-codex` | Execute Codex CLI with file analysis, models, sessions |
| `brainstorm` | Generate ideas with SCAMPER, design-thinking, etc. |
| `list-sessions` | View/delete/clear conversation sessions |
| `health` | Diagnose CLI installation, version, features |
| `ping` / `help` | Test connection, show CLI help |

Models

Default: gpt-5.4, with fallback chain gpt-5.3-codex → gpt-5.2-codex → gpt-5.1-codex-max → gpt-5.2.

| Model | Use Case |
| --- | --- |
| gpt-5.4 | Latest frontier agentic coding (default) |
| gpt-5.3-codex | Frontier agentic coding |
| gpt-5.2-codex | Frontier agentic coding |
| gpt-5.1-codex-max | Deep and fast reasoning |
| gpt-5.1-codex-mini | Cost-efficient quick tasks |
| gpt-5.2 | Broad knowledge, reasoning and coding |

Key Features

Session Management (v1.4.0+)

Multi-turn conversations with workspace isolation:

{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }

Environment:

  • CODEX_SESSION_TTL_MS - Session TTL (default: 24h)
  • CODEX_MAX_SESSIONS - Max sessions (default: 50)
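
These variables can be passed to the server through the MCP client configuration. The `env` field is the standard way MCP clients supply environment variables to stdio servers; the values below are illustrative (a 1-hour TTL and a 20-session cap), not defaults:

```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"],
      "env": {
        "CODEX_SESSION_TTL_MS": "3600000",
        "CODEX_MAX_SESSIONS": "20"
      }
    }
  }
}
```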

Local OSS Models (v1.6.0+)

Run with local Ollama or LM Studio instead of OpenAI:

// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }

// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }

// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }

Requirements: Ollama running locally with a model that supports tool calling (e.g. qwen3:8b).

Advanced Options

| Parameter | Description |
| --- | --- |
| `model` | Model selection |
| `sessionId` | Enable conversation continuity |
| `sandbox` | Enable --full-auto mode |
| `search` | Enable web search |
| `changeMode` | Structured OLD/NEW edits |
| `addDirs` | Additional writable directories |
| `toolOutputTokenLimit` | Cap response verbosity (100-10,000) |
| `reasoningEffort` | Reasoning depth: low, medium, high, xhigh |
| `oss` | Use local OSS model provider |
| `localProvider` | Local provider: lmstudio or ollama |
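
To show how these options compose, here is a sketch of a single request combining several of them. The parameter names come from the table above; the specific values and the file path are illustrative assumptions, not recommended settings:

```json
{
  "prompt": "review @src/ and propose edits",
  "model": "gpt-5.1-codex-mini",
  "changeMode": true,
  "addDirs": ["./docs"],
  "toolOutputTokenLimit": 2000,
  "reasoningEffort": "medium"
}
```

Here `addDirs` grants write access to an extra directory, while `toolOutputTokenLimit` keeps the structured patch output compact.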

CLI Compatibility

| Version | Features |
| --- | --- |
| v0.60.0+ | GPT-5.2 model family |
| v0.59.0+ | --add-dir, token limits |
| v0.52.0+ | Native --search flag |
| v0.36.0+ | Native codex resume (sessions) |

Troubleshooting

codex --version    # Check CLI version
codex login        # Authenticate

Use health tool for diagnostics: 'use health verbose:true'

Migration

v2.0.x → v2.1.0: gpt-5.4 as new default model, updated fallback chain.

v1.5.x → v1.6.0: Local OSS model support (localProvider, oss), gpt-5.3-codex default model, xhigh reasoning effort.

v1.3.x → v1.4.0: New sessionId parameter, list-sessions/health tools, structured error handling. No breaking changes.

License

MIT License. Not affiliated with OpenAI.


Documentation | Issues | Inspired by jamubc/gemini-mcp-tool
