
CC-Meta (Prompt Evaluator)

CC-Meta is an MCP server that evaluates prompt quality using OpenAI or Anthropic models, providing numerical scores and actionable feedback to help users refine their LLM interactions.

Stars: 5 · Forks: 1 · Tools: 2 · Updated: Jul 17, 2025 · Validated: Jan 11, 2026

CC-Meta (Claude Code Metaprompter)

CC-Meta lets you iterate on your Claude Code prompts without leaving the terminal. Instead of switching to the web client to test and refine prompts, you get instant AI feedback on clarity, specificity, and completeness right in your current workflow. This keeps you in context and speeds up the process of crafting effective prompts.

An MCP (Model Context Protocol) server that evaluates prompts using AI to provide detailed feedback on clarity, completeness, and effectiveness.

Before & After: Asking "Build a calculator app"

[Side-by-side screenshots: without CC-Meta, the vague prompt goes out as-is; with CC-Meta, you get detailed feedback on it first.]

Features

  • Multi-model support - Evaluate with supported OpenAI or Anthropic models (see Supported Models below)
  • Flexible API keys - Provide your own API key for each evaluation
  • Two tools available:
    • ping - Test if the server is connected and working
    • evaluate - Get AI-powered analysis of your prompts

Setup

  1. Install dependencies:

    npm install
    # or
    yarn install
    
  2. Build the project:

    npm run build
    
  3. Configure your model and API key: Edit the .mcp.json file to set PROMPT_EVAL_MODEL (sonnet-4, opus-4, or o3) and PROMPT_EVAL_API_KEY:

    {
      "mcpServers": {
        "prompt-evaluator": {
          "command": "node",
          "args": ["./prompt-evaluator-mcp/start.js"],
          "env": {
            "PROMPT_EVAL_MODEL": "sonnet-4",  // or "o3", "opus-4"
            "PROMPT_EVAL_API_KEY": "your-api-key-here"
          }
        }
      }
    }
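
To use an OpenAI model instead, the same env block should work with the OpenAI model name; presumably the API key must come from the provider of whichever model you choose. For example:

    "env": {
      "PROMPT_EVAL_MODEL": "o3",
      "PROMPT_EVAL_API_KEY": "your-openai-api-key-here"
    }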
    

Usage

Once configured, you have multiple ways to evaluate prompts:

Quick Slash Command (Recommended)

/meta Your prompt here without quotes
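
For example, to evaluate the prompt from the screenshots above:

/meta Build a calculator app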

Direct MCP Function Calls

mcp_prompt-evaluator_ping()  # Test connection
mcp_prompt-evaluator_evaluate("Your prompt to evaluate")

Supported Models

  • OpenAI: o3 (o3-2025-04-16)
  • Anthropic: opus-4 (claude-opus-4-20250514), sonnet-4 (claude-sonnet-4-20250514)

The AI evaluation provides:

  • Score from 0-10
  • Specific strengths of your prompt
  • Areas for improvement
  • Suggested rewrites when needed
  • Analysis of:
    • Clarity of intent
    • Specificity of requirements
    • Context provided
    • Actionability
    • Edge cases considered
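
As a rough illustration (the exact wording and format will vary with the model and the template in src/prompt.ts), evaluating the vague prompt "Build a calculator app" might return something like:

    Score: 3/10
    Strengths: the end goal (a calculator app) is clear
    Areas for improvement: no platform, language, feature list, or UI requirements given
    Suggested rewrite: "Build a command-line calculator in TypeScript that supports
      +, -, *, and /, with unit tests covering edge cases such as division by zero."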

Customization

The evaluation prompt is stored in src/prompt.ts and can be easily customized:

  • Edit the prompt template to change evaluation criteria
  • Modify the scoring rubric and weights
  • Adjust the output format
  • Add domain-specific evaluation rules
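
As a sketch of the kind of template that lives there (illustrative only - the function name and structure below are assumptions, not the actual contents of src/prompt.ts):

    // Hypothetical shape of the evaluation template in src/prompt.ts.
    // buildEvaluationPrompt is an assumed name; the real export may differ.
    export function buildEvaluationPrompt(userPrompt: string): string {
      return [
        "You are a prompt-quality evaluator.",
        "Score the prompt from 0 to 10, list its strengths and areas for improvement,",
        "and suggest a rewrite if needed. Consider clarity of intent, specificity of",
        "requirements, context provided, actionability, and edge cases.",
        "",
        "Prompt to evaluate:",
        userPrompt,
      ].join("\n");
    }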

After making changes, rebuild with npm run build.
