# MCP-Maestro
*Every great research operation needs a conductor.*
This MCP server turns Maestro into a tool your AI assistant can actually direct. Think of it as the bridge between "write me a report" and having a whole research orchestra play in harmony.
## What is Maestro?
Maestro is an AI research framework with serious infrastructure. While others send one agent to do one thing, Maestro coordinates multiple specialized agents — planning, research, writing, reflection — all working together to produce properly structured, multi-section research output.
- Source: github.com/Dianachong/maestro
- Agent count: 5+ specialized agents
- Secret sauce: Agentic layer with planning, reflection, and writing passes
The backend runs an agentic layer on top of multiple LLM calls, manages research cycles, and maintains a document pipeline with embeddings and reranking.
## What does this MCP server do?
It exposes Maestro's mission management system through MCP. You can:
- Fire off missions and let Maestro's agents do the heavy lifting
- Track progress in real-time as sections get researched
- Pause, resume, or stop research mid-flight
- Pull reports once the orchestra finishes playing
## The Full Suite of Tools

| Tool | What it does |
|---|---|
| `create_mission` | Launch a new research mission |
| `get_mission_status` | Check the status of a running mission |
| `get_report` | Pull the research report when done |
| `get_notes` | Get all research notes collected |
| `pause` | Pause a running mission |
| `resume` | Continue a paused mission |
| `stop` | Cancel a running mission |
## Getting Started
### Prerequisites
- Docker and Docker Compose
- Python 3.10+
### 1. Get Maestro Conducting

```bash
# Docker Compose is the easiest path
git clone https://github.com/Dianachong/maestro.git
cd maestro/docker
docker compose up
```
This spins up the backend API plus PostgreSQL with pgvector for embeddings.
For more complex setups, check the official deployment docs.
### 2. Set Up This Server

```bash
git clone https://github.com/Dianachong/mcp-maestro.git
cd mcp-maestro
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### 3. Point It At Maestro

```bash
cp .env.example .env
```

Set `MAESTRO_BASE_URL` to your Maestro API endpoint. The default port is 10303.
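If you want to sanity-check the value before starting the server, a small loader like this works — a plain-Python sketch; both helpers are mine, and the real server may use python-dotenv or similar instead:

```python
from urllib.parse import urlparse

def load_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env file, skipping comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def validate_base_url(env: dict[str, str]) -> str:
    """Require a full http(s) URL, e.g. http://localhost:10303."""
    url = env.get("MAESTRO_BASE_URL", "")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError("MAESTRO_BASE_URL must be a full URL, e.g. http://localhost:10303")
    return url
```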
## 🔧 Configuration for AI Assistants
This section shows how to configure various AI code assistants to use the Maestro MCP server.
### Claude Code (Anthropic)

Claude Code is Anthropic's official CLI tool for interacting with Claude models locally.

Configuration file: `~/.claude.json`

```json
{
  "mcpServers": {
    "maestro": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
      "env": {
        "MAESTRO_BASE_URL": "http://localhost:10303"
      }
    }
  }
}
```

Note: Ensure the path to `server.py` is absolute, not relative.
### OpenCode (OpenCode CLI)

OpenCode is a modern AI code assistant CLI with MCP support.

Configuration file: `~/.config/opencode/config.json` (or set `OPENCODE_CONFIG_PATH`)

```json
{
  "mcp": {
    "servers": {
      "maestro": {
        "command": "python",
        "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
        "env": {
          "MAESTRO_BASE_URL": "http://localhost:10303"
        }
      }
    }
  }
}
```
### Qwen Code (Alibaba/Qwen)

Qwen Code is Alibaba's AI coding assistant based on the Qwen model series.

Configuration file: `~/.config/qwen-code/mcp.json`

```json
{
  "mcpServers": {
    "maestro": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
      "env": {
        "MAESTRO_BASE_URL": "http://localhost:10303"
      }
    }
  }
}
```
### Cursor (cursor.com)

Cursor is an AI-first code editor built on VS Code with deep MCP integration.

Configuration file: `~/.cursor/mcp.json`

```json
{
  "mcpServers": {
    "maestro": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
      "env": {
        "MAESTRO_BASE_URL": "http://localhost:10303"
      }
    }
  }
}
```

Alternative: Open Cursor → Settings → MCP → Add new server.
### Windsurf (Codeium)

Windsurf is Codeium's AI code assistant with agentic capabilities.

Configuration file: `~/.codeium/windsurf/mcp_config.json`

```json
{
  "mcpServers": {
    "maestro": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
      "env": {
        "MAESTRO_BASE_URL": "http://localhost:10303"
      }
    }
  }
}
```

Note: Some Windsurf versions also support adding MCP servers via the settings UI.
### GitHub Copilot (VS Code Extension)

GitHub Copilot can use MCP servers through VS Code's MCP extension support.

Configuration: Install the "MCP" extension for VS Code, then add to `.vscode/mcp.json` in your workspace:

```json
{
  "servers": {
    "maestro": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/mcp-maestro/server.py"],
      "env": {
        "MAESTRO_BASE_URL": "http://localhost:10303"
      }
    }
  }
}
```
### Quick Reference

| Assistant | Config Location | Config Format |
|---|---|---|
| Claude Code | `~/.claude.json` | JSON with `mcpServers` |
| OpenCode | `~/.config/opencode/config.json` | JSON with `mcp.servers` |
| Qwen Code | `~/.config/qwen-code/mcp.json` | JSON with `mcpServers` |
| Cursor | `~/.cursor/mcp.json` | JSON with `mcpServers` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` | JSON with `mcpServers` |
| Copilot (VS Code) | `.vscode/mcp.json` | JSON with `servers` |
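All of these configs share the same server entry and differ only in the top-level wrapper key, so you can generate them instead of copy-pasting. A sketch — the `config_for` helper and its shape map are mine, not part of this repo:

```python
import json

def mcp_entry(server_path: str, base_url: str = "http://localhost:10303") -> dict:
    """Build the server entry shared by all of the configs above."""
    return {
        "command": "python",
        "args": [server_path],
        "env": {"MAESTRO_BASE_URL": base_url},
    }

def config_for(assistant: str, server_path: str) -> str:
    """Wrap the shared entry in a given assistant's top-level key."""
    entry = {"maestro": mcp_entry(server_path)}
    shapes = {
        "claude":   {"mcpServers": entry},          # also Qwen, Cursor, Windsurf
        "opencode": {"mcp": {"servers": entry}},
        "copilot":  {"servers": entry},
    }
    return json.dumps(shapes[assistant], indent=2)
```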
## How It Works
### The Mission Lifecycle

```
create_mission → running → completed
                    ↘ [pause] → paused → [resume] → running
                    ↘ [stop] → cancelled
```
- Create with your research request
- Track status as agents do their thing
- Pull the report when it completes
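The allowed transitions fit in a few lines of Python — a sketch based only on the lifecycle shown above, not code from this repo:

```python
# Allowed mission state transitions, inferred from the lifecycle diagram.
TRANSITIONS = {
    "running":   {"pause": "paused", "stop": "cancelled", "complete": "completed"},
    "paused":    {"resume": "running", "stop": "cancelled"},
    "completed": {},   # terminal
    "cancelled": {},   # terminal
}

def apply(state: str, action: str) -> str:
    """Return the next state, or raise if the action is invalid here."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} a mission in state {state!r}") from None
```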
### Example Flow

```
You: Create a research mission about advances in solid-state batteries
CLI: Mission created with ID: mission-abc123

You: Check status of mission abc123
CLI: Status: running, Section 2/5 complete

You: Get research notes for mission abc123
CLI: [Array of research notes from agents]

You: Get report for mission abc123
CLI: [Full multi-section research report]
```
### What's Inside a Mission
- Planning Agent: Breaks down the research into sections
- Research Agents: Hunt for information on each section
- Writing Agent: Synthesizes findings into prose
- Reflection Agent: Reviews and suggests improvements
- Note Assignment: Tracks all sources and findings
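The hand-off between those agents can be sketched as a plain pipeline. The stand-in functions below are toys for illustration only — Maestro's real agents are LLM-backed:

```python
# Toy stand-ins for Maestro's agents; the real ones make LLM calls.
def plan(question):        return [f"{question}: background", f"{question}: findings"]
def research(section):     return f"notes on {section}"
def write(sections, notes): return "\n".join(f"## {s}\n{n}" for s, n in zip(sections, notes))
def reflect(draft):        return draft + "\n\n(reviewed)"

def run_mission(question: str) -> str:
    """Mirror the agent hand-offs listed above."""
    sections = plan(question)                  # Planning Agent: outline sections
    notes = [research(s) for s in sections]    # Research Agents: gather per section
    draft = write(sections, notes)             # Writing Agent: synthesize prose
    return reflect(draft)                      # Reflection Agent: review pass
```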
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `MAESTRO_BASE_URL` | (required) | Where Maestro's API lives |
| `LOG_LEVEL` | `INFO` | Set to `DEBUG` for noisy logs |
## Troubleshooting
### Mission won't start

- Is Maestro's API responding? Check `MAESTRO_BASE_URL` in `.env`
- Check Maestro's logs for what went wrong
### Mission stuck

- Use `stop` to cancel, then `create_mission` with a refined query
### Connection refused

- Firewall? Port conflict? Docker not running?
- Try `docker ps` to confirm Maestro is up
### Timeout errors

- Research missions can take several minutes
- Use `get_mission_status` to monitor progress
- Consider using `quick` depth for faster results
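If you're scripting against this server, a poll-until-terminal loop avoids both hammering the API and giving up too early. A sketch, assuming a callable that returns the mission's status string (the helper name and terminal states are my assumptions):

```python
import time

def wait_for_completion(get_status, poll_seconds: float = 5.0, timeout: float = 1800.0) -> str:
    """Poll a get_mission_status-style callable until a terminal state or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "cancelled", "failed"):
            return status
        time.sleep(poll_seconds)  # don't hammer the API
    raise TimeoutError("mission did not finish within the timeout")
```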
## Contributing
Issues welcome. If you find a bug, include the mission ID if applicable.
## License
MIT