
Claude Concilium

License: MIT · Node.js 18+ · MCP Protocol · Servers · Smoke Tests

Multi-agent AI consultation framework for Claude Code via MCP.

Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.

Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ─► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate

The Problem

Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.

The Solution

Concilium runs parallel consultations with multiple LLMs through the standard MCP protocol. Each LLM server wraps a CLI tool, so no API keys are needed for the primary providers (they authenticate via OAuth).

Key features:

  • Parallel consultation with 2+ AI agents
  • Production-grade fallback chains with error detection
  • Each MCP server works standalone or as part of Concilium
  • Plug & play: clone, npm install, add to .mcp.json

Architecture

┌─────────────────────────────────────────────────────────┐
│                     Claude Code                         │
│                                                         │
│  "Review this code for race conditions"                 │
│                                                         │
│  ┌──────────────┐  ┌──────────────┐                     │
│  │  MCP Call #1 │  │  MCP Call #2 │   (parallel)        │
│  └──────┬───────┘  └──────┬───────┘                     │
│         │                 │                             │
└─────────┼─────────────────┼─────────────────────────────┘
          │                 │
          ▼                 ▼
   ┌──────────────┐  ┌──────────────┐
   │  mcp-openai  │  │  mcp-gemini  │     Primary agents
   │ (codex exec) │  │ (gemini -p)  │
   └──────┬───────┘  └──────┬───────┘
          │                 │
          ▼                 ▼
   ┌──────────────┐  ┌──────────────┐
   │   OpenAI     │  │   Google     │     LLM providers
   │   (OAuth)    │  │   (OAuth)    │
   └──────────────┘  └──────────────┘

   Fallback chain (on quota/error):
   OpenAI → Qwen → DeepSeek
   Gemini → Qwen → DeepSeek

Quickstart

1. Clone and install

git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs

Expected output:

PASS mcp-openai  (Tools: openai_chat, openai_review)
PASS mcp-gemini  (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen    (Tools: qwen_chat)
All tests passed.

2. Set up providers

Pick at least 2 providers:

| Provider | Auth                  | Free Tier                   | Setup       |
|----------|-----------------------|-----------------------------|-------------|
| OpenAI   | codex login (OAuth)   | ChatGPT Plus weekly credits | Setup guide |
| Gemini   | Google OAuth          | 1000 req/day                | Setup guide |
| Qwen     | qwen login or API key | Varies                      | Setup guide |
| DeepSeek | API key               | Pay-per-use (cheap)         | Setup guide |

3. Add to Claude Code

Copy config/mcp.json.example and update paths:

# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Update "/path/to/claude-concilium" with actual path

Or add servers individually to your existing .mcp.json:

{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}

4. Install the skill (optional)

Copy the Concilium skill to your Claude Code commands:

cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md

Now use /ai-concilium in Claude Code to trigger a multi-agent consultation.

MCP Servers

Each server can be used independently — you don't need all of them.

| Server     | CLI Tool | Auth                 | Tools                       |
|------------|----------|----------------------|-----------------------------|
| mcp-openai | codex    | OAuth (ChatGPT Plus) | openai_chat, openai_review  |
| mcp-gemini | gemini   | Google OAuth         | gemini_chat, gemini_analyze |
| mcp-qwen   | qwen     | API key / CLI login  | qwen_chat                   |

DeepSeek uses the existing deepseek-mcp-server npm package — no custom server needed.
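Because each server speaks plain MCP over stdio, any MCP client can call it directly, not just Claude Code. A `tools/call` request to, say, mcp-qwen has the shape below (the `prompt` argument name is an assumption here; check the schema each server advertises via `tools/list` for the real parameter names):

```javascript
// JSON-RPC 2.0 request an MCP client writes to the server's stdin
// (after the usual MCP initialize handshake). The "prompt" argument
// name is illustrative, not confirmed against the servers' schemas.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "qwen_chat",
    arguments: { prompt: "Is this SQL query injection-safe?" },
  },
};
console.log(JSON.stringify(request));
```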

How It Works

Consultation Flow

  1. Formulate — describe the problem concisely (under 500 chars)
  2. Send in parallel — OpenAI + Gemini get the same prompt
  3. Handle errors — if a provider fails, fallback chain kicks in (Qwen → DeepSeek)
  4. Synthesize — compare responses, find consensus
  5. Iterate (optional) — resolve disagreements with follow-up questions
  6. Decide — apply the synthesized solution
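Steps 2–4 above can be sketched in a few lines of Node. The agent functions here are mock stand-ins for the real MCP tool calls (`openai_chat`, `gemini_chat`); a real client would go through MCP instead:

```javascript
// Mock stand-ins for the real MCP tool calls.
async function askOpenAI(prompt) { return "Use a mutex"; }
async function askGemini(prompt) { return "Use a mutex"; }

// Send the same prompt in parallel, tolerate individual failures,
// then check whether the answers that came back agree.
async function consult(prompt) {
  const results = await Promise.allSettled([askOpenAI(prompt), askGemini(prompt)]);
  const opinions = results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
  const consensus = opinions.length > 1 && opinions.every((o) => o === opinions[0]);
  return { opinions, consensus }; // consensus=false would trigger step 5 (iterate)
}

consult("Review this code for race conditions").then((r) => console.log(r));
```

In practice "consensus" is a judgment call made by Claude during synthesis rather than a string comparison; the equality check here only illustrates the control flow.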

Error Detection

All servers detect provider-specific errors and return structured responses:

| Error Type                   | Meaning                   | Action                    |
|------------------------------|---------------------------|---------------------------|
| QUOTA_EXCEEDED               | Rate/credit limit hit     | Use fallback provider     |
| AUTH_EXPIRED / AUTH_REQUIRED | Token needs refresh       | Re-authenticate CLI       |
| MODEL_NOT_SUPPORTED          | Model unavailable on plan | Use default model         |
| Timeout                      | Process hung              | Auto-killed, use fallback |
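A sketch of how such classification might look. The match patterns below are illustrative guesses, not the servers' actual matching rules; each provider CLI words its errors differently:

```javascript
// Map raw CLI stderr to the structured error types in the table above.
// The substrings matched here are assumptions for illustration only.
function classifyError(stderr) {
  const s = stderr.toLowerCase();
  if (s.includes("quota") || s.includes("rate limit")) return "QUOTA_EXCEEDED";
  if (s.includes("unauthorized") || s.includes("login")) return "AUTH_REQUIRED";
  if (s.includes("model") && s.includes("not")) return "MODEL_NOT_SUPPORTED";
  return "UNKNOWN";
}

console.log(classifyError("429: rate limit exceeded"));      // QUOTA_EXCEEDED
console.log(classifyError("please run `codex login` first")); // AUTH_REQUIRED
```

Returning a structured type rather than raw stderr is what lets the caller decide mechanically whether to fall back, re-authenticate, or give up.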

Fallback Chain

Primary:    OpenAI ──────────────► Response
              │  (QUOTA_EXCEEDED?)
              ▼
Fallback 1: Qwen ────────────────► Response
              │  (timeout?)
              ▼
Fallback 2: DeepSeek ────────────► Response (always available)
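In code, a chain like this is just an ordered list of providers tried until one succeeds. A sketch with mock providers (the real servers' internals may differ):

```javascript
// Try providers in order; on a recoverable error, fall through to the next.
async function withFallback(providers, prompt) {
  const errors = [];
  for (const p of providers) {
    try {
      return { provider: p.name, answer: await p.ask(prompt) };
    } catch (err) {
      errors.push(`${p.name}: ${err.message}`); // e.g. QUOTA_EXCEEDED, timeout
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}

// Mock chain: OpenAI is over quota, so Qwen answers.
const chain = [
  { name: "openai", ask: async () => { throw new Error("QUOTA_EXCEEDED"); } },
  { name: "qwen", ask: async () => "LGTM" },
  { name: "deepseek", ask: async () => "LGTM" },
];
withFallback(chain, "review").then((r) => console.log(r.provider)); // prints "qwen"
```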

When to Use Concilium

| Scenario                 | Recommended Agents                    |
|--------------------------|---------------------------------------|
| Code review              | OpenAI + Gemini (parallel)            |
| Architecture decision    | OpenAI + Gemini → iterate if disagree |
| Stuck bug (3+ attempts)  | All available agents                  |
| Performance optimization | Gemini (1M context) + OpenAI          |
| Security review          | OpenAI + Gemini + manual verification |

Docker

Run any server in a container:

# Build
docker build -t claude-concilium .

# Run a specific server (mcp-openai | mcp-gemini | mcp-qwen)
docker run -i --rm -e SERVER=mcp-openai claude-concilium
docker run -i --rm -e SERVER=mcp-gemini claude-concilium

Note: The servers wrap CLI tools (codex, gemini, qwen) that require local authentication. Mount your auth credentials when running:

# OpenAI (Codex)
docker run -i --rm -e SERVER=mcp-openai \
  -v ~/.codex:/root/.codex:ro \
  claude-concilium

# Gemini
docker run -i --rm -e SERVER=mcp-gemini \
  -v ~/.config/gemini:/root/.config/gemini:ro \
  claude-concilium

Customization

See docs/customization.md for:

  • Adding your own LLM provider
  • Modifying the fallback chain
  • MCP server template
  • Custom prompt strategies

License

MIT
