# 🤖 Fullstack MCP Playground
Build end-to-end fullstack AI applications with the Model Context Protocol (MCP) and Claude as the LLM engine.
A production-ready template with a microservices architecture for MCP Servers, where each server exposes specialized tools that AI agents can use. The frontend acts as the MCP Host, orchestrating multiple servers and providing a chat interface powered by Claude.
**FROM LOCALHOST TO PRODUCTION – BUILT LIKE A HACKER**
## 🧠 Overview
This is a fullstack template for building AI-powered applications using the Model Context Protocol (MCP). It demonstrates how to:
- Create multiple MCP Servers as microservices (database, files, custom tools)
- Build an MCP Host (frontend) that connects to multiple servers
- Integrate Claude AI to consume tools from all connected servers
- Scale horizontally by adding new MCP servers without touching existing code
### Architecture
```
┌──────────────────────────────────────────────────────────────┐
│                 FRONTEND (Next.js) = MCP HOST                │
│  • UI (Chat, Server Management)                              │
│  • MCP Orchestrator (connects to multiple servers)           │
│  • Claude Client (consumes tools from all servers)           │
└──────────────────────────────────────────────────────────────┘
        │                     │                     │
   [HTTPS/SSE]           [HTTPS/SSE]           [HTTPS/SSE]
        │                     │                     │
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  MCP Server  │     │  MCP Server  │     │  MCP Server  │
│     CORE     │     │   DATABASE   │     │    FILES     │
│  (health,    │     │  (query,     │     │  (read,      │
│   metrics,   │     │   insert,    │     │   write,     │
│   config)    │     │   schema)    │     │   list)      │
└──────────────┘     └──────────────┘     └──────────────┘
```
## 🚀 Quick Start
### 1. Prerequisites
- Docker & Docker Compose
- Node.js 22+
- Anthropic API key (get one from the [Anthropic Console](https://console.anthropic.com))
- mkcert (for local HTTPS)
### 2. Clone and Configure

```bash
git clone https://github.com/leonobitech/fullstack-mcp-playground.git
cd fullstack-mcp-playground
cp .env.example .env
# Add your Anthropic API key to .env
```
### 3. Setup HTTPS

```bash
cd traefik/certs
mkcert "*.localhost" localhost 127.0.0.1 ::1
mv _wildcard.localhost+3.pem dev-local.pem
mv _wildcard.localhost+3-key.pem dev-local-key.pem
cd ../..
```
### 4. Start

```bash
docker network create leonobitech-net
docker compose up -d --build
```
### 5. Access
- Frontend: https://app.localhost
- Core API: https://api.localhost
- Traefik: https://traefik.localhost
## 🔧 Creating New MCP Servers

```bash
# Generate a new server
./scripts/create-mcp-server.sh weather
cd repositories/mcp-weather
npm install

# Add tools in src/mcp/tools/
# Register in docker-compose.yml and config/mcp-servers.json
```
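A generated tool is essentially a named definition plus a handler. As an illustrative sketch of the general shape (the template's actual interface may differ, and `get_forecast` with its schema is made up here):

```typescript
// Illustrative sketch of an MCP tool module -- the template's real
// interface may differ. get_forecast and its schema are hypothetical.
interface ToolResult {
  content: { type: "text"; text: string }[];
}

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
  handler: (args: Record<string, any>) => Promise<ToolResult>;
}

const getForecast: ToolDefinition = {
  name: "get_forecast",
  description: "Return a (fake) weather forecast for a city",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string", description: "City name" } },
    required: ["city"],
  },
  handler: async (args) => ({
    // A real tool would call a weather API here.
    content: [{ type: "text", text: `Forecast for ${args.city}: sunny` }],
  }),
};
```

Dropping a file like this into `src/mcp/tools/` (wired up however the template registers tools) is all a new capability needs; the orchestrator and Claude discover it at runtime.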
## 📁 Structure

```
fullstack-mcp-playground/
├── config/
│   └── mcp-servers.json        # Server registry
├── repositories/
│   ├── core/                   # MCP Core Server
│   ├── mcp-database/           # MCP Database Server
│   ├── mcp-files/              # MCP Files Server
│   ├── mcp-template/           # Template
│   └── frontend/               # MCP Host (Next.js)
├── scripts/
│   └── create-mcp-server.sh    # Generator CLI
├── traefik/                    # Proxy config
├── docker-compose.yml
└── .env.example
```
## 🛠️ Available MCP Servers

### Core (mcp-core)

- `get_health` - System health
- `get_metrics` - CPU/memory metrics
- `get_config` - Configuration

### Database (mcp-database)

- `query_database` - SQL SELECT
- `insert_record` - Insert data
- `get_database_schema` - Table schemas

### Files (mcp-files)

- `example_tool` - Template tool
## 🧪 Testing the Application
### 1. Get Your Anthropic API Key

You need a Claude API key to test the AI agent:

- Go to the [Anthropic Console](https://console.anthropic.com)
- Sign up or log in (this is separate from a Claude Pro subscription)
- Get $5 in free API credits (enough for extensive testing)
- Create an API key
- Add it to your `.env` file:

  ```bash
  ANTHROPIC_API_KEY=sk-ant-api03-...
  ```

- Restart the frontend container:

  ```bash
  docker compose restart frontend
  ```
### 2. Access the Chat Interface
Open your browser and go to:
- Chat: https://app.localhost/chat
- Server Management: https://app.localhost/servers
### 3. Available Tools: What Really Works
#### ✅ Real Functional Tools (mcp-core)

The `mcp-core` server exposes three real tools that interact with the actual running Node.js process:
##### 🩺 Tool 1: `get_health`
What it does:
- Returns real-time health status of the mcp-core service
- Shows actual uptime (how long the service has been running)
- Displays real memory usage (heap and RSS)
- Provides timestamp and service name
Real data returned:

```jsonc
{
  "status": "healthy",
  "uptime": "142s",
  "memory": {
    "heapUsed": "45MB",   // Real heap memory used
    "heapTotal": "67MB",  // Real total heap allocated
    "rss": "89MB"         // Real resident set size
  },
  "timestamp": "2025-10-10T...",
  "service": "mcp-core"
}
```
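Data like this can come straight from Node's built-in `process` API. A minimal sketch of how such a snapshot could be built (not the actual mcp-core handler):

```typescript
// Sketch of a health snapshot built from Node built-ins only.
// The real mcp-core handler may differ in shape and detail.
function getHealthSnapshot() {
  const mem = process.memoryUsage();
  const toMB = (bytes: number) => `${Math.round(bytes / 1024 / 1024)}MB`;
  return {
    status: "healthy",
    uptime: `${Math.round(process.uptime())}s`, // seconds since process start
    memory: {
      heapUsed: toMB(mem.heapUsed),
      heapTotal: toMB(mem.heapTotal),
      rss: toMB(mem.rss), // resident set size
    },
    timestamp: new Date().toISOString(),
    service: "mcp-core",
  };
}

console.log(JSON.stringify(getHealthSnapshot(), null, 2));
```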
Example questions to test:

```
¿Cuál es el estado de salud del sistema?
How long has the core service been running?
Show me the current memory usage
Is the system healthy?
Check mcp-core health and memory
```
##### 📊 Tool 2: `get_metrics`
What it does:
- Returns real CPU usage metrics from the Node.js process
- Shows actual memory consumption in bytes
- Can filter by metric type: `cpu`, `memory`, or `all`
- Provides a timestamp for each reading

Input parameter:

- `metric` (optional): `"cpu" | "memory" | "all"` (default: `"all"`)
Real data returned:

```jsonc
{
  "timestamp": "2025-10-10T...",
  "cpu": {
    "user": 156789,        // Real CPU microseconds in user mode
    "system": 34567        // Real CPU microseconds in system mode
  },
  "memory": {
    "heapUsed": 47185920,  // Real bytes
    "heapTotal": 70254592, // Real bytes
    "rss": 93450240,       // Real bytes
    "external": 1234567    // Real bytes
  }
}
```
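These figures map directly onto Node's built-in counters (`process.cpuUsage()` and `process.memoryUsage()`). A minimal sketch of a reader with the same optional filter (not the actual mcp-core implementation):

```typescript
// Sketch of a metrics reader with the optional "metric" filter,
// using only Node built-ins. The real tool may differ in detail.
type MetricKind = "cpu" | "memory" | "all";

function getMetricsSnapshot(metric: MetricKind = "all") {
  const out: Record<string, unknown> = { timestamp: new Date().toISOString() };
  if (metric === "cpu" || metric === "all") {
    out.cpu = process.cpuUsage(); // { user, system } in microseconds
  }
  if (metric === "memory" || metric === "all") {
    out.memory = process.memoryUsage(); // heapUsed, heapTotal, rss, external in bytes
  }
  return out;
}
```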
Example questions to test:

```
Muéstrame las métricas del sistema
What's the current CPU usage?
Get memory metrics only
Show me all metrics
How much memory is the core service using?
Get CPU and memory metrics
```
##### ⚙️ Tool 3: `get_config`
What it does:
- Returns actual service configuration from environment variables
- Shows Node.js version, platform, and architecture
- Displays service name, environment, and port
- Returns CORS settings and log level
- Safe: No secrets exposed (passwords, API keys filtered out)
Real data returned:

```jsonc
{
  "service": "mcp-core",
  "environment": "production",
  "port": 3333,
  "logLevel": "info",
  "corsOrigin": "https://app.localhost",
  "version": "0.1.0",
  "nodeVersion": "v22.x.x", // Real Node.js version
  "platform": "linux",      // Real platform (Docker)
  "arch": "x64"             // Real architecture
}
```
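A safe way to build a response like this is to whitelist fields rather than dump `process.env`, so secrets never leave the server. A sketch under that assumption (the env variable names here are illustrative):

```typescript
// Sketch of a config reader that whitelists fields instead of dumping
// process.env, so secrets such as API keys are never exposed.
// Env variable names are illustrative, not the actual codebase's.
function getConfigSnapshot() {
  return {
    service: process.env.SERVICE_NAME ?? "mcp-core",
    environment: process.env.NODE_ENV ?? "development",
    port: Number(process.env.PORT ?? 3333),
    logLevel: process.env.LOG_LEVEL ?? "info",
    corsOrigin: process.env.CORS_ORIGIN ?? "https://app.localhost",
    nodeVersion: process.version, // e.g. "v22.x.x"
    platform: process.platform,   // e.g. "linux" inside Docker
    arch: process.arch,           // e.g. "x64"
  };
}
```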
Example questions to test:

```
¿Cuál es la configuración del servicio?
What Node.js version is running?
Show me the service configuration
What port is mcp-core using?
Get environment settings
What's the CORS origin configured?
```
#### 🚧 Mock Tools (mcp-database): Not Yet Functional

| Tool | Status |
|---|---|
| `query_database` | 🟡 Returns mock data (TODO: connect real PostgreSQL) |
| `insert_record` | 🟡 Returns mock data (TODO: connect real PostgreSQL) |
| `get_database_schema` | 🟡 Returns mock data (TODO: connect real PostgreSQL) |
These tools are placeholders. You can use them to test the flow, but they return fake data until a real database is connected.
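When wiring these up to a real PostgreSQL instance, `query_database` will need a read-only guard before forwarding SQL. A minimal sketch of one possible string-level check (not part of the current codebase):

```typescript
// Sketch of a read-only guard for query_database once it targets a real
// PostgreSQL database. Hypothetical helper, not in the current codebase.
function isReadOnlyQuery(sql: string): boolean {
  // Strip a single trailing semicolon, then require a SELECT with no
  // further statements chained behind it.
  const normalized = sql.trim().replace(/;\s*$/, "").toLowerCase();
  return normalized.startsWith("select") && !normalized.includes(";");
}
```

A string check alone is not sufficient for production (CTEs, comments, and functions can slip through); the real safeguard is running queries through a read-only database role.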
### 4. Complete Testing Guide
Copy and paste these prompts into the chat to test each tool:
#### 🧪 Test Individual Tools
**Test `get_health`:**

```
¿Cuál es el estado de salud del sistema? Muéstrame el uptime y memoria.
What's the current system health? Show me uptime and memory usage.
```

**Expected:** Claude uses `get_health` → returns real uptime (e.g., "142s") and memory usage
**Test `get_metrics`:**

```
Muéstrame las métricas de CPU y memoria del sistema.
Show me the current CPU and memory metrics in detail.
```

**Expected:** Claude uses `get_metrics` → returns real CPU microseconds and memory bytes
**Test `get_config`:**

```
¿Qué configuración tiene el servicio? ¿Qué versión de Node está corriendo?
What's the service configuration? What Node.js version is running?
```

**Expected:** Claude uses `get_config` → returns the real Node version, platform, and settings
#### 🚀 Test Advanced Scenarios
**Test multiple tools in one request:**

```
Dame un reporte completo del sistema: salud, métricas y configuración.
Give me a complete system report with health, metrics, and configuration.
```

**Expected:** Claude uses all three tools (`get_health`, `get_metrics`, `get_config`) and compiles a comprehensive report
**Test a tool with parameters:**

```
Get only memory metrics, not CPU.
```

**Expected:** Claude uses `get_metrics` with the parameter `{"metric": "memory"}`
**Test conversational flow:**

```
Check the system health. If memory is over 100MB, also get the full metrics.
```

**Expected:** Claude uses `get_health` first, analyzes the result, then decides whether to call `get_metrics`
**Test in Spanish:**

```
Dime cuánto tiempo lleva corriendo el servicio mcp-core y cuánta memoria está usando.
```

**Expected:** Claude understands Spanish, uses `get_health`, and responds in Spanish with real data
### 5. What You Should See
- Message from you appears on the left
- "Thinking..." indicator shows Claude is processing
- Tool execution indicators show which tools Claude is using
- Final response from Claude with the data from the tools
- Tools panel on the right shows all available tools from enabled servers
### 6. How the Flow Works

```
You: "Check system health"
        ↓
Frontend → Claude API (with available tools)
        ↓
Claude decides to use: get_health
        ↓
Frontend → MCP Orchestrator → mcp-core server
        ↓
Tool executes: returns real uptime + memory
        ↓
Frontend → Claude API (with tool result)
        ↓
Claude: "The system has been running for 142 seconds with 45MB heap usage..."
```
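The flow above is a loop on the host side: call the model, execute any tool it requests, feed the result back, and stop once the model answers in plain text. A sketch of that loop, where `callModel` and `callTool` are stand-ins for the Claude API call and the MCP orchestrator dispatch (illustrative names, not the frontend's actual code):

```typescript
// Sketch of the host-side tool loop. callModel and callTool are injected
// stand-ins for the Claude API and the MCP orchestrator; the frontend's
// real implementation will differ.
type ModelReply =
  | { type: "text"; text: string }
  | { type: "tool_use"; name: string; input: unknown };

async function runToolLoop(
  userMessage: string,
  callModel: (messages: unknown[]) => Promise<ModelReply>,
  callTool: (name: string, input: unknown) => Promise<string>,
): Promise<string> {
  const messages: unknown[] = [{ role: "user", content: userMessage }];
  for (;;) {
    const reply = await callModel(messages);
    if (reply.type === "text") return reply.text; // final answer for the UI
    // Claude requested a tool: execute it on the matching MCP server,
    // then feed the result back so Claude can continue.
    const result = await callTool(reply.name, reply.input);
    messages.push({ role: "assistant", content: reply });
    messages.push({ role: "user", content: result });
  }
}
```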
### 7. Enable/Disable Servers
Go to https://app.localhost/servers to:
- Toggle servers on/off
- See which tools each server provides
- Watch the available tools in the chat update in real-time
Example: disable `mcp-core` → the chat is left with only the (mock) database tools
### 8. Cost Estimate
Testing is very cheap:
- ~$0.002 per conversation (including tool calls)
- $5 free credits = ~2,500 test conversations
- Most prompts cost less than 1 cent
## 🎯 Next Steps
Now that you've verified the end-to-end flow works:
- Create real MCP servers (weather, email, calendar, etc.)
- Replace mock database tools with real PostgreSQL queries
- Build custom tools specific to your use case
- Toggle servers to give the agent different capabilities
The "gallery" concept is ready: create new servers, enable/disable them from the UI, and watch your AI agent gain new superpowers!
## 🐳 Docker Commands

```bash
docker compose up -d             # Start
docker compose up -d --build     # Rebuild
docker compose logs -f mcp-core  # Logs
docker compose down              # Stop
```
## 🔐 Environment Variables

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Claude API key (required) |
| `FRONTEND_DOMAIN` | Frontend hostname |
| `BACKEND_DOMAIN` | Backend hostname |
| `DATABASE_URL` | Database connection string |
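For reference, a filled-in `.env` might look like the sketch below. The values are illustrative (copy `.env.example` and adjust); the database hostname and credentials in particular are assumptions, not the project's actual defaults.

```bash
# Illustrative values only -- start from .env.example
ANTHROPIC_API_KEY=sk-ant-api03-...
FRONTEND_DOMAIN=app.localhost
BACKEND_DOMAIN=api.localhost
DATABASE_URL=postgresql://user:password@postgres:5432/mcp
```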
## 📄 License

MIT © 2025 – Leonobitech
## 🥷 Leonobitech Dev Team

[www.leonobitech.com](https://www.leonobitech.com)

Made with 🧠 and AI love 🤖