
prompyai

Scores your prompts against your real codebase — context-aware prompt intelligence

Registry · Updated Mar 12, 2026

Quick Install

npx -y prompyai-mcp

PrompyAI

Context-aware prompt intelligence MCP server for Claude CLI. Scores developer prompts against their real codebase, suggests improvements, and rewrites enhanced prompts.

How It Works

When you write a prompt in Claude CLI, PrompyAI automatically evaluates it against your actual project — files, tech stack, conventions, session history — and returns a score with actionable suggestions.

Prompt Score: 28/100 [F]
Session context: 12 file references carried forward

  Specificity        6/25  ===...........
  Context            5/25  ==............
  Clarity           10/25  ======........
  Anchoring          7/25  ====..........

This prompt is too vague for good results. Critical fixes:
  1. Replace "fix" with what's actually broken
     > "debug the JWT validation error in @src/middleware/auth.ts"
  2. Describe expected vs actual behavior
     > "should return 200 but returns 401 when token has role claim"

Enhanced prompt:
```
In @src/middleware/auth.ts, the JWT validation returns 401 for valid
tokens. Update validateToken to extract the role claim correctly.
Ensure existing vitest tests pass.
```

Quick Start

# Zero-install (recommended)
claude mcp add prompyai -- npx prompyai-mcp serve

# Or install globally first
npm install -g prompyai-mcp
claude mcp add prompyai -- prompyai serve

Or add the server manually to your MCP configuration:

{
  "mcpServers": {
    "prompyai": {
      "command": "npx",
      "args": ["prompyai-mcp", "serve"]
    }
  }
}

Requires Node.js 20+.

MCP Tools

evaluate_prompt

Automatically called on every user message. Scores your prompt and returns suggestions.

Parameter       Required  Description
prompt          yes       The prompt text to evaluate
workspace_path  yes       Absolute path to your project
active_file     no        Currently open file path
session_id      no        Claude Code session ID for multi-turn context
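For reference, these parameters travel as the arguments of a standard MCP tools/call request. A hedged sketch in TypeScript — the path, file, and session ID below are placeholders, not values PrompyAI prescribes:

```typescript
// Illustrative MCP tools/call payload for evaluate_prompt.
// Argument values are placeholders for your own project.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "evaluate_prompt",
    arguments: {
      prompt: "fix the login bug",                 // required
      workspace_path: "/home/me/projects/my-app",  // required: absolute path
      active_file: "src/middleware/auth.ts",       // optional
      session_id: "abc123",                        // optional: multi-turn context
    },
  },
};
```

In practice Claude CLI constructs this call for you on every message; the shape is shown only to clarify which fields are required.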

get_context

Returns a summary of your project: detected tech stack, recently modified files, key folders, and AI instruction summaries.

Parameter       Required  Description
workspace_path  yes       Absolute path to your project

prompyai_toggle

Turns auto-evaluation on or off. Enabled by default.

Parameter  Required  Description
enabled    yes       true to enable, false to disable

Scoring Dimensions

Each dimension scores 0-25, total 0-100.

Dimension    Measures
Specificity  Concrete actions vs vague verbs, output format, constraints
Context      File references, error messages, expected vs actual behavior
Clarity      Single focused task, success criteria, unambiguous language
Anchoring    File paths, project entity references, hot file mentions

Grades: A (90+), B (70+), C (50+), D (30+), F (<30)
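The grade thresholds can be expressed as a small function — a sketch of the documented mapping, not PrompyAI's actual implementation:

```typescript
// Maps a total PrompyAI score (0-100) to its letter grade,
// per the documented thresholds: A 90+, B 70+, C 50+, D 30+, F <30.
function grade(total: number): string {
  if (total >= 90) return "A";
  if (total >= 70) return "B";
  if (total >= 50) return "C";
  if (total >= 30) return "D";
  return "F";
}
```

The 28/100 example above falls below the 30-point floor, which is why it grades as an F.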

Features

  • Auto-scoring — Evaluates every prompt automatically, no manual trigger needed
  • Session-aware — Reads Claude Code JSONL transcripts for multi-turn context
  • Multi-agent aware — Includes subagent research in context scoring
  • Monorepo support — Detects tech stacks across workspace packages
  • AI-powered suggestions — Claude Haiku generates context-aware improvements (with API key)
  • Template fallback — Heuristic-only mode works without an API key
  • Rate limiting — 100 AI calls/day per machine, heuristic fallback when exceeded
  • Anonymous telemetry — Usage stats only (hashed machine ID, no PII)
  • Toggle — Turn auto-evaluation on/off at any time

AI-Powered Suggestions

When ANTHROPIC_API_KEY is set, PrompyAI uses Claude Haiku to generate suggestions grounded in your project structure. Without an API key, it falls back to template-based suggestions.

export ANTHROPIC_API_KEY=sk-ant-...

Environment Variables

Variable                Required  Description
ANTHROPIC_API_KEY       No        Enables AI-powered suggestions via Claude Haiku
PROMPYAI_TELEMETRY      No        Set to false to opt out of anonymous telemetry
PROMPYAI_TELEMETRY_URL  No        Override telemetry endpoint URL
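One plausible way these variables combine, sketched in TypeScript — the function name and shape are illustrative, not PrompyAI's actual code:

```typescript
// Illustrative reading of the documented environment variables.
function readConfig(env: Record<string, string | undefined>) {
  return {
    // Telemetry stays on unless explicitly set to "false".
    telemetry: env.PROMPYAI_TELEMETRY !== "false",
    // AI-powered suggestions require an Anthropic API key.
    aiSuggestions: Boolean(env.ANTHROPIC_API_KEY),
    // Optional override for the telemetry endpoint.
    telemetryUrl: env.PROMPYAI_TELEMETRY_URL,
  };
}
```

Note the asymmetry: telemetry is opt-out (on by default), while AI suggestions are opt-in (off until a key is provided).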

CLI Commands

prompyai serve              # Start the MCP server (default)
prompyai doctor             # Run environment diagnostics
  --workspace <path>        # Workspace to check (default: cwd)

Rate Limits

PrompyAI includes built-in rate limiting to manage API costs:

  • Per machine: 100 AI-enhanced evaluations per day
  • Global: Monthly cost cap (~$500)
  • When limits hit: Scoring continues with heuristic-only mode (no AI suggestions)
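The per-machine limit behaves like a daily counter that resets at midnight and signals callers to fall back to heuristic-only scoring. A minimal sketch — the class name and API are invented for illustration:

```typescript
// Sketch of the documented per-machine limit: 100 AI-enhanced
// evaluations per day, heuristic fallback afterward.
// DailyLimiter and tryConsume are illustrative names, not PrompyAI's API.
class DailyLimiter {
  private day = "";
  private count = 0;
  constructor(private readonly maxPerDay = 100) {}

  // Returns true if an AI call is allowed today; when false,
  // the caller should score with heuristics only.
  tryConsume(now: Date = new Date()): boolean {
    const today = now.toISOString().slice(0, 10); // e.g. "2026-03-12"
    if (today !== this.day) {
      // New calendar day: reset the counter.
      this.day = today;
      this.count = 0;
    }
    if (this.count >= this.maxPerDay) return false;
    this.count++;
    return true;
  }
}
```

Scoring itself never stops when the limit is reached; only the AI-generated suggestions are skipped.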

Monorepo Structure

PrompyAI/
├── packages/
│   ├── mcp-server/     ← Core product (npm: prompyai-mcp)
│   ├── landing/        ← Future: prompyai.com
│   └── shared/         ← Future: shared types for IDE extensions
├── CLAUDE.md
├── ARCHITECTURE.md
└── README.md

Development

# Install dependencies
pnpm install

# Run tests (220 tests)
pnpm test

# Type check
pnpm typecheck

# Build
pnpm build

# Test with MCP Inspector
npx @modelcontextprotocol/inspector npx tsx packages/mcp-server/src/mcp/server.ts

License

MIT
