# chatgpt-interlocutor
An MCP server that gives Claude Code a ChatGPT second opinion. Cross-model adversarial review without leaving your terminal.
## Why
When you're working in Claude Code, sometimes you want a second perspective — a different model's take on the same problem. Not because one is better, but because disagreement surfaces blind spots.
This server adds two tools to every Claude Code session:
- `ask_chatgpt` — Send a question to ChatGPT and get its response inline. Second opinions, alternative approaches, sanity checks.
- `compare_approaches` — Send the same prompt to both models and get a structured comparison. Specificity, depth, accuracy, voice.
## Install

### Prerequisites

You need an OpenAI API key. If you use the `npx` method below, you also need Node.js installed.

### Add to Claude Code

Add this to your `~/.claude.json` under `mcpServers`:
```json
{
  "mcpServers": {
    "chatgpt-interlocutor": {
      "command": "npx",
      "args": ["-y", "chatgpt-interlocutor"],
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```
Restart Claude Code. You should see `ask_chatgpt` and `compare_approaches` in your available tools.
### Or install globally

```sh
npm install -g chatgpt-interlocutor
```

Then configure Claude Code to run the installed binary directly:
```json
{
  "mcpServers": {
    "chatgpt-interlocutor": {
      "command": "chatgpt-interlocutor",
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```
## Tools

### ask_chatgpt

Get a second opinion from ChatGPT.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `prompt` | string | required | The question or task to send |
| `model` | string | `gpt-4o-mini` | `gpt-4o-mini`, `gpt-4o`, or `o3-mini` |
| `system` | string | — | Optional system prompt for context |
| `temperature` | number | 0.7 | Creativity, 0–2. Lower = more focused |
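To make the parameters concrete, here is a minimal sketch of how they could map onto an OpenAI Chat Completions request body. The function and type names are hypothetical, not the server's actual internals; only the defaults come from the table above.

```typescript
// Illustrative only: mapping ask_chatgpt arguments to a Chat Completions
// request body. Names here are assumptions, not the server's real code.
type AskArgs = {
  prompt: string;
  model?: string;
  system?: string;
  temperature?: number;
};

function buildChatRequest(args: AskArgs) {
  const messages: { role: string; content: string }[] = [];
  if (args.system) {
    // The optional system prompt comes first, as context for the user message.
    messages.push({ role: "system", content: args.system });
  }
  messages.push({ role: "user", content: args.prompt });
  return {
    model: args.model ?? "gpt-4o-mini", // default from the table above
    temperature: args.temperature ?? 0.7, // default from the table above
    messages,
  };
}
```

Omitting `model` and `temperature` falls back to the cheap, mid-creativity defaults, which suits quick second opinions.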
### compare_approaches
Benchmark Claude's output against ChatGPT's on the same task.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `task_description` | string | required | What both models are being asked to do |
| `prompt` | string | required | The prompt to send to ChatGPT |
| `your_output` | string | — | Claude's output for side-by-side comparison |
| `model` | string | `gpt-4o-mini` | Which ChatGPT model to use |
When `your_output` is provided, the response includes a comparison framework:
- Specificity — Which is more concrete and actionable?
- Depth — Which goes deeper into the problem?
- Voice — Which sounds more human / less generic AI?
- Structure — Which is better organized?
- Accuracy — Which is more correct?
- Differentiation — Where do they diverge most?
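For example, a `compare_approaches` call might carry arguments like the following (all values here are illustrative):

```json
{
  "task_description": "Explain what a mutex is in two sentences",
  "prompt": "Explain what a mutex is in two sentences.",
  "your_output": "A mutex is a lock that guarantees only one thread touches a shared resource at a time. Acquiring it blocks other threads until it is released.",
  "model": "gpt-4o-mini"
}
```

Because `your_output` is present, the response would walk through the six criteria above rather than just returning ChatGPT's answer.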
## Models
| Model | Cost | Best for |
|---|---|---|
| `gpt-4o-mini` | Cheapest | Quick second opinions, brainstorming |
| `gpt-4o` | Mid | Deeper analysis, complex reasoning |
| `o3-mini` | Mid | Math, logic, structured reasoning |
## How it works
This is a Model Context Protocol server that runs over stdio. Claude Code launches it as a subprocess, sends tool calls, and gets responses back. Your prompts go to the OpenAI API using your own key — nothing is stored or proxied.
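Concretely, each tool call is a JSON-RPC 2.0 message exchanged over the subprocess's stdin/stdout, per the MCP specification. An abridged request/reply pair might look like this (prompt and reply text are made up):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "ask_chatgpt", "arguments": {"prompt": "Is this API design idiomatic?"}}}
{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "Mostly, though the naming is inconsistent in two places."}]}}
```

Claude Code writes the first line to the server's stdin; the server calls the OpenAI API and writes the second line back on stdout.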
## License
MIT