
PromptTuner MCP



PromptTuner MCP is an MCP server that refines, analyzes, optimizes, and validates prompts using OpenAI, Anthropic, or Google Gemini.

What it does

  1. Validates and trims input prompts (enforces MAX_PROMPT_LENGTH).
  2. Resolves the target format (auto uses heuristics; falls back to gpt if no format is detected).
  3. Calls the selected provider with retry and timeout controls.
  4. Validates and normalizes LLM output, falling back to stricter prompts or the basic technique when needed.
  5. Returns human-readable text plus machine-friendly structuredContent (and resource blocks for refined/optimized outputs).

Features

  • Refine prompts with single techniques or full multi-technique optimization.
  • Analyze quality with scores, characteristics, and actionable suggestions.
  • Validate prompts for issues, token limits, and injection risks.
  • Auto-detect target format (Claude XML, GPT Markdown, or JSON).
  • Structured outputs with provider/model metadata, fallback indicators, and score normalization.
  • Retry logic with exponential backoff for transient provider failures.
  • Emits MCP progress notifications for analyze_prompt when a progress token is provided.
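The retry behavior can be pictured with a minimal sketch (illustrative only; the server's actual retry helper lives in src/lib/ and may differ in parameters and jitter strategy):

```typescript
// Illustrative retry-with-exponential-backoff sketch (not the server's actual code).
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 250 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // 250ms, 500ms, 1000ms, ... plus a little jitter to avoid thundering herds.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Transient provider failures (timeouts, rate limits) are retried; the final error is rethrown once attempts are exhausted.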

Quick Start

PromptTuner runs over stdio only. The dev:http and start:http scripts are compatibility aliases (no HTTP transport yet).

Claude Desktop

Add to claude_desktop_config.json:

```json
{
  "mcpServers": {
    "prompttuner": {
      "command": "npx",
      "args": ["-y", "@j0hanz/prompt-tuner-mcp-server@latest"],
      "env": {
        "LLM_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

Set LLM_PROVIDER to your preferred provider and supply the matching API key. Configure only the key for the active provider.
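For example, a hypothetical Anthropic setup keeps the same shape and swaps the provider and key (variable names from the Configuration section; key value is a placeholder):

```json
{
  "mcpServers": {
    "prompttuner": {
      "command": "npx",
      "args": ["-y", "@j0hanz/prompt-tuner-mcp-server@latest"],
      "env": {
        "LLM_PROVIDER": "anthropic",
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}
```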

Configuration (Essentials)

PromptTuner reads configuration from environment variables. CLI flags can override them (run prompt-tuner-mcp-server --help). Full reference in CONFIGURATION.md.

| Variable | Default | Description |
| --- | --- | --- |
| LLM_PROVIDER | openai | openai, anthropic, or google. |
| OPENAI_API_KEY | - | Required when LLM_PROVIDER=openai. |
| ANTHROPIC_API_KEY | - | Required when LLM_PROVIDER=anthropic. |
| GOOGLE_API_KEY | - | Required when LLM_PROVIDER=google. |
| LLM_MODEL | provider default | Override the default model. |
| LLM_TIMEOUT_MS | 60000 | Per-request timeout in milliseconds. |
| LLM_MAX_TOKENS | 8000 | Upper bound for LLM outputs (tool caps apply). |
| MAX_PROMPT_LENGTH | 10000 | Max trimmed prompt length (chars). |
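As a sketch, the essentials above could be supplied via the environment when launching the server directly (values illustrative; the key is a placeholder):

```shell
export LLM_PROVIDER=google
export GOOGLE_API_KEY="..."
export LLM_TIMEOUT_MS=30000
export MAX_PROMPT_LENGTH=8000
npx -y @j0hanz/prompt-tuner-mcp-server@latest
```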

CLI Options

CLI flags override environment variables (run prompt-tuner-mcp-server --help).

| Flag | Env var | Description |
| --- | --- | --- |
| --log-format <text \| json> | LOG_FORMAT | Override log format (currently unused). |
| --debug / --no-debug | DEBUG | Enable/disable debug logging. |
| --include-error-context | INCLUDE_ERROR_CONTEXT | Include sanitized prompt snippet in errors. |
| --llm-provider <provider> | LLM_PROVIDER | openai, anthropic, or google. |
| --llm-model <name> | LLM_MODEL | Override the default model. |
| --llm-timeout-ms <number> | LLM_TIMEOUT_MS | Override request timeout (ms). |
| --llm-max-tokens <number> | LLM_MAX_TOKENS | Override output token cap. |
| --max-prompt-length <number> | MAX_PROMPT_LENGTH | Override max prompt length (chars). |
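For instance, overriding the provider and limits at launch might look like this (flags from the table above; values illustrative):

```shell
prompt-tuner-mcp-server \
  --llm-provider openai \
  --llm-timeout-ms 30000 \
  --llm-max-tokens 4000 \
  --max-prompt-length 8000
```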

Tools

All tools accept plain text, Markdown, or XML prompts. Responses include content (human-readable) and structuredContent (machine-readable).

refine_prompt

Fix grammar, improve clarity, and apply a single technique.

| Parameter | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | Yes | - | Trimmed and length-checked. |
| technique | string | No | basic | basic, chainOfThought, fewShot, roleBased, structured, comprehensive. |
| targetFormat | string | No | auto | auto, claude, gpt, json. auto uses heuristics. |

Returns (structuredContent): ok, original, refined, corrections, technique, targetFormat, usedFallback, provider, model.
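A hypothetical exchange might look like this (field names from the tables above; all values invented for illustration). Arguments:

```json
{
  "name": "refine_prompt",
  "arguments": {
    "prompt": "write me sum code for sorting",
    "technique": "basic",
    "targetFormat": "gpt"
  }
}
```

And a possible structuredContent result:

```json
{
  "ok": true,
  "original": "write me sum code for sorting",
  "refined": "Write a function that sorts an array of numbers in ascending order.",
  "corrections": ["Corrected 'sum' to 'some'"],
  "technique": "basic",
  "targetFormat": "gpt",
  "usedFallback": false,
  "provider": "openai",
  "model": "..."
}
```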

analyze_prompt

Score prompt quality (0-100) and provide suggestions.

| Parameter | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | Yes | - | Trimmed and length-checked. |

Returns: ok, hasTypos, isVague, missingContext, suggestions, score, characteristics, usedFallback, scoreAdjusted, overallSource, provider, model.
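A hypothetical structuredContent payload (all values invented; the exact shape of characteristics and overallSource may differ):

```json
{
  "ok": true,
  "hasTypos": false,
  "isVague": true,
  "missingContext": true,
  "suggestions": ["State the desired output format.", "Add an example input."],
  "score": 62,
  "characteristics": ["short", "imperative"],
  "usedFallback": false,
  "scoreAdjusted": false,
  "overallSource": "...",
  "provider": "openai",
  "model": "..."
}
```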

optimize_prompt

Apply multiple techniques sequentially and return before/after scores.

| Parameter | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | Yes | - | Trimmed and length-checked. |
| techniques | string[] | No | ["basic"] | 1-6 techniques; order preserved. comprehensive expands to basic -> roleBased -> structured -> fewShot -> chainOfThought. |
| targetFormat | string | No | auto | auto, claude, gpt, json. |

Returns: ok, original, optimized, techniquesApplied, targetFormat, beforeScore, afterScore, scoreDelta, improvements, usedFallback, scoreAdjusted, overallSource, provider, model.
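A hypothetical call chaining two techniques (values illustrative; presumably scoreDelta is afterScore minus beforeScore):

```json
{
  "name": "optimize_prompt",
  "arguments": {
    "prompt": "summarize this article",
    "techniques": ["roleBased", "structured"],
    "targetFormat": "claude"
  }
}
```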

validate_prompt

Pre-flight validation: issues, token estimate, and injection risks.

| Parameter | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | Yes | - | Trimmed and length-checked. |
| targetModel | string | No | generic | claude, gpt, gemini, generic (token limits below). |
| checkInjection | boolean | No | true | When true, security risks are flagged as errors. |

Returns: ok, isValid, issues, tokenEstimate, tokenLimit, tokenUtilization, overLimit, targetModel, securityFlags, provider, model.

Token limits used for validate_prompt: claude 200000, gpt 128000, gemini 1000000, generic 8000.
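The token fields can be sketched with the limits listed above (illustrative only; the chars/4 estimator is a common heuristic and the server's actual estimator may differ):

```typescript
// Hypothetical sketch of validate_prompt's token math (limits from the README).
const TOKEN_LIMITS: Record<string, number> = {
  claude: 200_000,
  gpt: 128_000,
  gemini: 1_000_000,
  generic: 8_000,
};

// Rough "one token per 4 characters" heuristic; assumption, not the server's code.
function estimateTokens(prompt: string): number {
  return Math.ceil(prompt.length / 4);
}

function utilization(prompt: string, targetModel = "generic") {
  const tokenEstimate = estimateTokens(prompt);
  const tokenLimit = TOKEN_LIMITS[targetModel] ?? TOKEN_LIMITS.generic;
  return {
    tokenEstimate,
    tokenLimit,
    tokenUtilization: tokenEstimate / tokenLimit,
    overLimit: tokenEstimate > tokenLimit,
  };
}
```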

Response Format

  • content: array of content blocks (human-readable Markdown text plus optional resources).
  • structuredContent: machine-parseable results.
  • Errors return structuredContent.ok=false and an error object with code, message, optional context (sanitized, up to 200 chars), details, and recoveryHint.
  • Error responses also include isError: true.
  • refine_prompt and optimize_prompt include a resource content block with a file:/// URI and the prompt text in resource.text (Markdown).
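An error response might look like this (shape from the bullets above; the error code value is elided because the actual codes are not documented here):

```json
{
  "isError": true,
  "structuredContent": {
    "ok": false,
    "error": {
      "code": "...",
      "message": "Prompt exceeds MAX_PROMPT_LENGTH.",
      "recoveryHint": "Shorten the prompt and retry."
    }
  }
}
```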

Prompts

| Name | Description |
| --- | --- |
| quick-optimize | Fast prompt improvement with grammar and clarity fixes. |
| deep-optimize | Comprehensive optimization using the comprehensive technique. |
| analyze | Score prompt quality and return suggestions. |

All prompts accept a single argument: { "prompt": "..." }.
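Assuming the standard MCP prompts/get message shape, a request might look like:

```json
{
  "method": "prompts/get",
  "params": {
    "name": "quick-optimize",
    "arguments": { "prompt": "explain quicksort" }
  }
}
```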

Prompt Optimization Workflow

Technique selection guide

| Task type | Recommended techniques |
| --- | --- |
| Simple cleanup | basic |
| Code tasks | roleBased + structured |
| Complex reasoning | roleBased + chainOfThought |
| Data extraction | structured + fewShot |
| Maximum quality | comprehensive |

Recommended prompt architecture

  1. Role or identity
  2. Context or background
  3. Task or objective
  4. Steps or instructions
  5. Requirements or constraints (ALWAYS or NEVER)
  6. Output format
  7. Examples (if helpful)
  8. Final reminder
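As an illustration only (hand-written, not generated by the server), a prompt following this architecture might read:

```markdown
You are a senior data engineer.                         <!-- 1. Role -->
We ingest daily CSV exports from a billing system.      <!-- 2. Context -->
Write a validation script for these exports.            <!-- 3. Task -->
Steps:                                                  <!-- 4. Instructions -->
1. Parse each row.
2. Flag rows with a missing amount.
ALWAYS report row numbers. NEVER modify the source.     <!-- 5. Constraints -->
Output a Markdown table of flagged rows.                <!-- 6. Output format -->
Example: "Row 12: missing amount".                      <!-- 7. Example -->
Reminder: report problems, do not fix them.             <!-- 8. Final reminder -->
```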

Development

Prerequisites

  • Node.js >= 22.0.0
  • npm

Scripts

| Command | Description |
| --- | --- |
| npm run build | Compile TypeScript and set permissions. |
| npm run prepare | Build on install (publishing helper). |
| npm run dev | Run from source in watch mode. |
| npm run dev:http | Alias of npm run dev (no HTTP transport yet). |
| npm run watch | TypeScript compiler in watch mode. |
| npm run start | Run the compiled server from dist/. |
| npm run start:http | Alias of npm run start (no HTTP transport yet). |
| npm run test | Run node:test once. |
| npm run test:coverage | Run node:test with experimental coverage. |
| npm run test:watch | Run node:test in watch mode. |
| npm run lint | Run ESLint. |
| npm run format | Run Prettier. |
| npm run type-check | TypeScript type checking. |
| npm run inspector | Run MCP Inspector against dist/index.js. |
| npm run inspector:http | Alias of npm run inspector (no HTTP transport yet). |
| npm run duplication | Run jscpd duplication report. |

Project Structure

```
src/
  index.ts        Entry point
  server.ts       MCP server setup (stdio transport)
  config/         Configuration and constants
  lib/            Shared utilities (LLM, retry, validation)
  tools/          Tool implementations
  prompts/        MCP prompt templates
  schemas/        Zod input/output schemas

tests/            node:test suites

dist/             Compiled output (generated)

docs/             Static assets
```

Security

  • API keys are supplied only via environment variables.
  • Inputs are validated with Zod and additional length checks.
  • Error context is sanitized and truncated when INCLUDE_ERROR_CONTEXT=true.
  • Google safety filters can be disabled only via GOOGLE_SAFETY_DISABLED=true.

Contributing

Pull requests are welcome. Please include a short summary, tests run, and note any configuration changes.

License

MIT License. See LICENSE for details.
