Judges Panel
18 specialized judges that evaluate AI-generated code for security, cost, and quality.

Stars: 3 · Tools: 7 · Updated: Feb 19, 2026 · Validated: Feb 21, 2026 (validation duration 9.7s, server judges v1.8.0)

Quick Install

npx -y @kevinrabun/judges

Judges Panel

An MCP (Model Context Protocol) server that provides a panel of 18 specialized judges to evaluate AI-generated code — acting as an independent quality gate regardless of which project is being reviewed.



Quick Start

1. Install and Build

git clone https://github.com/KevinRabun/judges.git
cd judges
npm install
npm run build

2. Try the Demo

Run the included demo to see all 18 judges evaluate a purposely flawed API server:

npm run demo

This evaluates examples/sample-vulnerable-api.ts — a file intentionally packed with security holes, performance anti-patterns, and code quality issues — and prints a full verdict with per-judge scores and findings.
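The patterns the demo file is seeded with are of the kind sketched below. These snippets are illustrative only, not excerpts from examples/sample-vulnerable-api.ts:

```typescript
// DATA-: a secret committed to source (illustrative value)
const API_KEY = "sk-live-1234567890abcdef";

function authHeader(): { Authorization: string } {
  return { Authorization: `Bearer ${API_KEY}` };
}

// CYBER-: user input interpolated directly into SQL, an injection risk
function buildUserQuery(userId: string): string {
  return `SELECT * FROM users WHERE id = '${userId}'`;
}

// PERF-: N+1 pattern, one sequential round-trip per item instead of a batch fetch
async function loadOrders(
  ids: string[],
  db: { query: (q: string) => Promise<unknown> }
): Promise<unknown[]> {
  const orders: unknown[] = [];
  for (const id of ids) {
    orders.push(await db.query(buildUserQuery(id)));
  }
  return orders;
}
```

Feeding code like this to the panel is what produces the critical DATA-, CYBER-, and PERF- findings shown in the demo output.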

What you'll see:

╔══════════════════════════════════════════════════════════════╗
║           Judges Panel — Full Tribunal Demo                 ║
╚══════════════════════════════════════════════════════════════╝

  Overall Verdict : FAIL
  Overall Score   : 43/100
  Critical Issues : 15
  High Issues     : 17
  Total Findings  : 83
  Judges Run      : 18

  Per-Judge Breakdown:
  ────────────────────────────────────────────────────────────────
  ❌ Judge Data Security              0/100    7 finding(s)
  ❌ Judge Cybersecurity              0/100    7 finding(s)
  ❌ Judge Cost Effectiveness        52/100    5 finding(s)
  ⚠️  Judge Scalability              65/100    4 finding(s)
  ❌ Judge Cloud Readiness           61/100    4 finding(s)
  ❌ Judge Software Practices        45/100    6 finding(s)
  ❌ Judge Accessibility              0/100    8 finding(s)
  ❌ Judge API Design                 0/100    9 finding(s)
  ❌ Judge Reliability               54/100    3 finding(s)
  ❌ Judge Observability             45/100    5 finding(s)
  ❌ Judge Performance               27/100    5 finding(s)
  ❌ Judge Compliance                 0/100    4 finding(s)
  ⚠️  Judge Testing                  90/100    1 finding(s)
  ⚠️  Judge Documentation            70/100    4 finding(s)
  ⚠️  Judge Internationalization     65/100    4 finding(s)
  ⚠️  Judge Dependency Health        90/100    1 finding(s)
  ❌ Judge Concurrency               44/100    4 finding(s)
  ❌ Judge Ethics & Bias             65/100    2 finding(s)

3. Run the Tests

npm test

Runs 184 automated tests covering all 18 judges, markdown formatters, and edge cases.

4. Connect to Your Editor

Add the Judges Panel as an MCP server so your AI coding assistant can use it automatically.

VS Code — create .vscode/mcp.json in your project:

{
  "servers": {
    "judges": {
      "command": "node",
      "args": ["/absolute/path/to/judges/dist/index.js"]
    }
  }
}

Claude Desktop — add to claude_desktop_config.json:

{
  "mcpServers": {
    "judges": {
      "command": "node",
      "args": ["/absolute/path/to/judges/dist/index.js"]
    }
  }
}

Or install from npm instead of cloning:

npm install -g @kevinrabun/judges

Then use judges as the command in your MCP config (no args needed).
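With the global install, a minimal Claude Desktop entry would look like this (assuming the package installs a `judges` executable on your PATH, as the step above implies):

```json
{
  "mcpServers": {
    "judges": {
      "command": "judges"
    }
  }
}
```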


The Judge Panel

| Judge | Domain | Rule Prefix | What It Evaluates |
|---|---|---|---|
| Data Security | Data Security & Privacy | DATA- | Encryption, PII handling, secrets management, access controls |
| Cybersecurity | Cybersecurity & Threat Defense | CYBER- | Injection attacks, XSS, CSRF, auth flaws, OWASP Top 10 |
| Cost Effectiveness | Cost Optimization | COST- | Algorithm efficiency, N+1 queries, memory waste, caching strategy |
| Scalability | Scalability & Performance | SCALE- | Statelessness, horizontal scaling, concurrency, bottlenecks |
| Cloud Readiness | Cloud-Native & DevOps | CLOUD- | 12-Factor compliance, containerization, graceful shutdown, IaC |
| Software Practices | Engineering Best Practices | SWDEV- | SOLID principles, type safety, error handling, input validation |
| Accessibility | Accessibility (a11y) | A11Y- | WCAG compliance, screen reader support, keyboard navigation, ARIA |
| API Design | API Design & Contracts | API- | REST conventions, versioning, pagination, error responses |
| Reliability | Reliability & Resilience | REL- | Error handling, timeouts, retries, circuit breakers |
| Observability | Observability & Monitoring | OBS- | Structured logging, health checks, metrics, tracing |
| Performance | Performance & Efficiency | PERF- | N+1 queries, sync I/O, caching, memory leaks |
| Compliance | Regulatory Compliance | COMP- | GDPR/CCPA, PII protection, consent, data retention, audit trails |
| Testing | Testing & Quality Assurance | TEST- | Test coverage, assertions, test isolation, naming |
| Documentation | Documentation & Readability | DOC- | JSDoc/docstrings, magic numbers, TODOs, code comments |
| Internationalization | Internationalization (i18n) | I18N- | Hardcoded strings, locale handling, currency formatting |
| Dependency Health | Dependency Management | DEPS- | Version pinning, deprecated packages, supply chain |
| Concurrency | Concurrency & Async Safety | CONC- | Race conditions, unbounded parallelism, missing await |
| Ethics & Bias | Ethics & Bias | ETHICS- | Demographic logic, dark patterns, inclusive language |

How It Works

The tribunal operates in two modes:

  1. Pattern-Based Analysis (Tools) — The evaluate_code and evaluate_code_single_judge tools perform heuristic analysis using pattern matching to catch common anti-patterns. This works entirely offline with zero external API calls.

  2. LLM-Powered Deep Analysis (Prompts) — The server exposes MCP prompts (e.g., judge-data-security, full-tribunal) that provide each judge's expert persona as a system prompt. When used by an LLM-based client, this enables deeper, context-aware analysis beyond what pattern matching can detect.
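For the first mode, an MCP client reaches the tools through a standard JSON-RPC `tools/call` request. A sketch of what such a request might look like (the `id` and `code` values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate_code",
    "arguments": {
      "code": "const q = `SELECT * FROM users WHERE id = '${userId}'`;",
      "language": "typescript"
    }
  }
}
```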


Composable by Design

Judges Panel is intentionally focused on heuristic pattern detection — fast, offline, zero-dependency. It does not try to be an AST parser, a CVE scanner, or a linter. Those capabilities belong in dedicated MCP servers that an AI agent can orchestrate alongside Judges.

Recommended MCP Stack

When your AI coding assistant connects to multiple MCP servers, each one contributes its specialty:

┌─────────────────────────────────────────────────────────┐
│                   AI Coding Assistant                   │
│              (Claude, Copilot, Cursor, etc.)            │
└──────┬──────────┬──────────┬──────────┬────────────────┘
       │          │          │          │
       ▼          ▼          ▼          ▼
  ┌─────────┐ ┌────────┐ ┌────────┐ ┌────────┐
  │ Judges  │ │  AST   │ │  CVE / │ │ Linter │
  │  Panel  │ │ Server │ │  SBOM  │ │ Server │
  └─────────┘ └────────┘ └────────┘ └────────┘
   Heuristic   Structural  Vuln DB    Style &
   patterns    analysis    scanning   correctness
| Layer | What It Does | Example Servers |
|---|---|---|
| Judges Panel | 18-judge quality gate — security patterns, cost, scalability, a11y, compliance, ethics | This server |
| AST Analysis | Deep structural analysis — data flow, complexity metrics, dead code, type tracking | Tree-sitter, Semgrep, SonarQube MCP servers |
| CVE / SBOM | Vulnerability scanning against live databases — known CVEs, license risks, supply chain | OSV, Snyk, Trivy, Grype MCP servers |
| Linting | Language-specific style and correctness rules | ESLint, Ruff, Clippy MCP servers |
| Runtime Profiling | Memory, CPU, latency measurement on running code | Custom profiling MCP servers |

Why Orchestration Beats a Monolith

| Aspect | Monolith | Orchestrated MCP Stack |
|---|---|---|
| Maintenance | One team owns everything | Each server evolves independently |
| Depth | Shallow coverage of many domains | Deep expertise per server |
| Updates | Stale CVE data means a full redeploy | CVE server updates on its own |
| Language support | Must embed parsers for every language | AST server handles this |
| User choice | All or nothing | Pick the servers you need |
| Offline capability | Hard to achieve with CVE dependencies | Judges runs fully offline; CVE server handles network |

What This Means in Practice

When you ask your AI assistant "Is this code production-ready?", the agent can:

  1. Judges Panel → Scan for hardcoded secrets, missing error handling, N+1 queries, accessibility gaps, compliance issues
  2. AST Server → Analyze cyclomatic complexity, detect unreachable code, trace tainted data flows
  3. CVE Server → Check every dependency in package.json against known vulnerabilities
  4. Linter Server → Enforce team style rules, catch language-specific gotchas

Each server returns structured findings. The AI synthesizes everything into a single, actionable review — no single server needs to do it all.
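The synthesis step can be pictured as a simple merge of each server's findings into one severity-ranked list. A minimal sketch, with hypothetical shapes (the real servers each define their own finding format):

```typescript
// Hypothetical finding shape shared across servers for illustration
type Finding = {
  server: string;
  severity: "critical" | "high" | "medium" | "low";
  message: string;
};

// Lower rank sorts first
const SEVERITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 } as const;

// Flatten each server's findings and rank the combined list by severity
function mergeFindings(...sources: Finding[][]): Finding[] {
  return sources
    .flat()
    .sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);
}
```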


MCP Tools

get_judges

List all available judges with their domains and descriptions.

evaluate_code

Submit code to the full judges panel. All 18 judges evaluate independently and return a combined verdict.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | yes | The source code to evaluate |
| language | string | yes | Programming language (e.g., typescript, python) |
| context | string | no | Additional context about the code |

evaluate_code_single_judge

Submit code to a specific judge for targeted review.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | yes | The source code to evaluate |
| language | string | yes | Programming language |
| judgeId | string | yes | See judge IDs below |
| context | string | no | Additional context |

Judge IDs

data-security · cybersecurity · cost-effectiveness · scalability · cloud-readiness · software-practices · accessibility · api-design · reliability · observability · performance · compliance · testing · documentation · internationalization · dependency-health · concurrency · ethics-bias
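As a sketch, the argument payload for a targeted review by a single judge might look like this (the `code` value is illustrative):

```json
{
  "code": "fetch(url).then((r) => r.json());",
  "language": "typescript",
  "judgeId": "reliability"
}
```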


MCP Prompts

Each judge has a corresponding prompt for LLM-powered deep analysis:

| Prompt | Description |
|---|---|
| judge-data-security | Deep data security review |
| judge-cybersecurity | Deep cybersecurity review |
| judge-cost-effectiveness | Deep cost optimization review |
| judge-scalability | Deep scalability review |
| judge-cloud-readiness | Deep cloud readiness review |
| judge-software-practices | Deep software practices review |
| judge-accessibility | Deep accessibility/WCAG review |
| judge-api-design | Deep API design review |
| judge-reliability | Deep reliability & resilience review |
| judge-observability | Deep observability & monitoring review |
| judge-performance | Deep performance optimization review |
| judge-compliance | Deep regulatory compliance review |
| judge-testing | Deep testing quality review |
| judge-documentation | Deep documentation quality review |
| judge-internationalization | Deep i18n review |
| judge-dependency-health | Deep dependency health review |
| judge-concurrency | Deep concurrency & async safety review |
| judge-ethics-bias | Deep ethics & bias review |
| full-tribunal | All 18 judges in a single prompt |

Scoring

Each judge scores the code from 0 to 100:

| Severity | Score Deduction |
|---|---|
| Critical | −30 points |
| High | −18 points |
| Medium | −10 points |
| Low | −5 points |
| Info | −2 points |

Verdict logic:

  • FAIL — Any critical finding, or score < 60
  • WARNING — Any high finding, any medium finding, or score < 80
  • PASS — Score ≥ 80 with no critical, high, or medium findings

The overall tribunal score is the average of all 18 judges. The overall verdict fails if any judge fails.
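The deduction and verdict rules above can be sketched in a few lines. This is an illustrative reimplementation, not the server's actual code; in particular, clamping scores at zero is an assumption suggested by the 0/100 scores in the demo output:

```typescript
type Severity = "critical" | "high" | "medium" | "low" | "info";

// Per-severity deductions from the table above
const DEDUCTIONS: Record<Severity, number> = {
  critical: 30,
  high: 18,
  medium: 10,
  low: 5,
  info: 2,
};

// Start at 100, subtract per finding, clamp at 0 (assumed behavior)
function score(findings: Severity[]): number {
  const total = findings.reduce((sum, s) => sum + DEDUCTIONS[s], 0);
  return Math.max(0, 100 - total);
}

// Verdict rules as stated above, checked in order of precedence
function verdict(findings: Severity[]): "PASS" | "WARNING" | "FAIL" {
  const s = score(findings);
  if (findings.includes("critical") || s < 60) return "FAIL";
  if (findings.includes("high") || findings.includes("medium") || s < 80) {
    return "WARNING";
  }
  return "PASS";
}
```

For example, a single critical finding scores 70/100 but still yields FAIL, because any critical finding fails the judge outright.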


Project Structure

judges/
├── src/
│   ├── index.ts              # MCP server entry point — tools, prompts, transport
│   ├── types.ts              # TypeScript interfaces (Finding, JudgeEvaluation, etc.)
│   ├── evaluators/           # Pattern-based analysis engine for each judge
│   │   ├── index.ts          # evaluateWithJudge(), evaluateWithTribunal()
│   │   ├── shared.ts         # Scoring, verdict logic, markdown formatters
│   │   └── *.ts              # One analyzer per judge (18 files)
│   └── judges/               # Judge definitions (id, name, domain, system prompt)
│       ├── index.ts          # JUDGES array, getJudge(), getJudgeSummaries()
│       └── *.ts              # One definition per judge (18 files)
├── examples/
│   ├── sample-vulnerable-api.ts  # Intentionally flawed code (triggers all 18 judges)
│   └── demo.ts                   # Run: npm run demo
├── tests/
│   └── judges.test.ts            # Run: npm test (184 tests)
├── server.json               # MCP Registry manifest
├── package.json
├── tsconfig.json
└── README.md

Scripts

| Command | Description |
|---|---|
| npm run build | Compile TypeScript to dist/ |
| npm run dev | Watch mode — recompile on save |
| npm test | Run the full test suite (184 tests) |
| npm run demo | Run the sample tribunal demo |
| npm start | Start the MCP server |
| npm run clean | Remove dist/ |

License

MIT
