MCP API Gateway

A unified local API gateway providing caching, rate limiting, and full Model Context Protocol compatibility for AI agent integration. It aggregates multiple API endpoints behind a single gateway with built-in observability and configurable cache eviction strategies.

Updated
Feb 22, 2026


Features

  • 🔌 Unified API Aggregation - Manage multiple API endpoints through a single gateway
  • 💾 Multi-Strategy Caching - LRU, LFU, FIFO, and TTL cache eviction policies
  • ⚡ Rate Limiting - Token bucket and sliding window algorithms
  • 🔗 MCP Protocol - Full Model Context Protocol support for AI agent integration
  • 📊 Observability - Built-in statistics and metrics
  • 🔄 Retry Logic - Automatic retry with exponential backoff

Installation

# Clone the repository
git clone https://github.com/bandageok/mcp-api-gateway.git
cd mcp-api-gateway

# Install dependencies
pip install -r requirements.txt

# Or install directly
pip install aiohttp pyyaml

Quick Start

1. Create a Configuration File

python gateway.py --create-config

This creates a config.yaml with sample endpoints:

host: localhost
port: 8080
cache:
  enabled: true
  max_size: 1000
  ttl: 300
  strategy: lru
rate_limit:
  enabled: true
  requests_per_minute: 60
apis:
  - name: github-api
    url: https://api.github.com
    method: GET
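
Once parsed (for example with `yaml.safe_load` from the PyYAML dependency), this file becomes a nested dictionary. A minimal sketch of overlaying user settings on defaults, assuming the config shape shown above (the `merge_config` helper is illustrative, not part of the gateway):

```python
# Defaults mirroring the sample config; user-supplied values override per key.
DEFAULTS = {
    "host": "localhost",
    "port": 8080,
    "cache": {"enabled": True, "max_size": 1000, "ttl": 300, "strategy": "lru"},
    "rate_limit": {"enabled": True, "requests_per_minute": 60},
}

def merge_config(user, defaults=DEFAULTS):
    """Recursively overlay user-supplied settings on the defaults."""
    merged = dict(defaults)
    for key, value in (user or {}).items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(value, merged[key])
        else:
            merged[key] = value
    return merged

# Override only the port and the cache strategy; everything else keeps its default.
print(merge_config({"port": 9090, "cache": {"strategy": "lfu"}}))
```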

2. Run the Gateway

# With config file
python gateway.py -c config.yaml

# Or with command line arguments
python gateway.py --host 0.0.0.0 --port 8080

3. Use the Gateway

# Call an API endpoint
curl http://localhost:8080/api/github-api/users/bandageok

# Check health
curl http://localhost:8080/health

# Get statistics
curl http://localhost:8080/stats

# Clear cache
curl -X DELETE http://localhost:8080/cache/clear

# Get configuration
curl http://localhost:8080/config

MCP Protocol Integration

The gateway provides full MCP protocol support for AI agents:

MCP Tools

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "github-api",
        "description": "Call GET https://api.github.com",
        "inputSchema": {
          "type": "object",
          "properties": {
            "params": {"type": "object"},
            "data": {"type": "object"}
          }
        }
      }
    ]
  }
}

Call a Tool

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "github-api",
    "arguments": {
      "params": {"path": "/users/bandageok"}
    }
  }
}
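
The same request can be issued from any JSON-RPC client. A small sketch of building the envelope shown above (the `mcp_request` helper name is illustrative, not part of the gateway):

```python
import json

def mcp_request(method, params, req_id):
    """Build a JSON-RPC 2.0 envelope for the gateway's /mcp endpoint."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

payload = mcp_request(
    "tools/call",
    {"name": "github-api", "arguments": {"params": {"path": "/users/bandageok"}}},
    req_id=2,
)
# POST this body to http://localhost:8080/mcp with Content-Type: application/json.
print(json.dumps(payload))
```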

MCP Resources

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list",
  "params": {}
}

Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | `localhost` | Host to bind to |
| `port` | int | `8080` | Port to bind to |
| `debug` | bool | `false` | Enable debug mode |
| `log_level` | string | `INFO` | Logging level |
| `cache.enabled` | bool | `true` | Enable caching |
| `cache.max_size` | int | `1000` | Maximum cache entries |
| `cache.ttl` | int | `300` | Cache TTL in seconds |
| `cache.strategy` | string | `lru` | Cache eviction strategy (`lru`/`lfu`/`fifo`/`ttl`) |
| `rate_limit.enabled` | bool | `true` | Enable rate limiting |
| `rate_limit.requests_per_minute` | int | `60` | Rate limit threshold |

API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Health check |
| `/health` | GET | Detailed health status |
| `/stats` | GET | Gateway statistics |
| `/config` | GET | Current configuration |
| `/cache/clear` | DELETE | Clear the cache |
| `/api/{name}` | * | Proxy to the configured API |
| `/mcp` | POST | MCP protocol endpoint |

Architecture

┌─────────────────────────────────────────────────────────────┐
│                      MCP API Gateway                         │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐    ┌─────────────┐    ┌───────────────┐  │
│  │   Cache     │    │Rate Limiter │    │  MCP Handler  │  │
│  │  (LRU/LFU)  │    │   (Token)   │    │               │  │
│  └─────────────┘    └─────────────┘    └───────────────┘  │
├─────────────────────────────────────────────────────────────┤
│                     API Client Pool                          │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐  │
│  │ GitHub   │  │ Weather  │  │  Stocks  │  │  Custom  │  │
│  │    API   │  │    API   │  │    API   │  │    API   │  │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘  │
└─────────────────────────────────────────────────────────────┘

Use Cases

1. AI Agent Integration

Connect AI agents to external APIs through MCP:

import requests

# Initialize MCP
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {}
})

# List available tools
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
})

2. API Rate Limiting

Protect external APIs from being overwhelmed:

rate_limit:
  enabled: true
  requests_per_minute: 60  # Max 60 requests per minute
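
The Features list mentions token bucket and sliding window algorithms. A minimal token-bucket sketch, assuming (as an illustration, not the gateway's actual implementation) that `requests_per_minute` sets both the burst capacity and the refill rate:

```python
import time

class TokenBucket:
    """Token bucket: refill tokens at a steady rate, spend one per request."""

    def __init__(self, requests_per_minute):
        self.capacity = requests_per_minute
        self.tokens = float(requests_per_minute)
        self.rate = requests_per_minute / 60.0  # tokens added per second
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(requests_per_minute=60)
print(all(bucket.allow() for _ in range(60)))  # a burst up to capacity is allowed
print(bucket.allow())  # the 61st immediate request is rejected
```

The bucket permits short bursts up to capacity while enforcing the average rate, which is why it is a common choice for protecting upstream APIs.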

3. Response Caching

Cache expensive API responses:

cache:
  enabled: true
  max_size: 1000
  ttl: 300  # Cache for 5 minutes
  strategy: lru  # Evict least recently used
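
A minimal sketch of what the `lru` strategy combined with the `ttl` setting does (the `LRUCache` class below is illustrative, not the gateway's code):

```python
import time
from collections import OrderedDict

class LRUCache:
    """LRU cache with a per-entry TTL, mirroring the settings above."""

    def __init__(self, max_size=1000, ttl=300):
        self.max_size, self.ttl = max_size, ttl
        self._store = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None or item[0] < time.monotonic():
            self._store.pop(key, None)  # drop expired entries lazily
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return item[1]

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(max_size=2, ttl=300)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a" so "b" becomes least recently used
cache.set("c", 3)  # exceeds max_size, so "b" is evicted
print(cache.get("a"), cache.get("b"), cache.get("c"))  # 1 None 3
```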

Examples

Python Client

import aiohttp
import asyncio

async def call_gateway():
    async with aiohttp.ClientSession() as session:
        # Call an API
        async with session.get("http://localhost:8080/api/github-api/users/bandageok") as resp:
            data = await resp.json()
            print(data)
        
        # Check stats
        async with session.get("http://localhost:8080/stats") as resp:
            stats = await resp.json()
            print(f"Cache hit rate: {stats['cache_hit_rate']}")

asyncio.run(call_gateway())

Add Custom API Endpoint

apis:
  - name: my-api
    url: https://api.example.com
    method: GET
    headers:
      Authorization: Bearer YOUR_TOKEN
    timeout: 30
    retry_count: 3
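
The `retry_count` option pairs with the retry feature listed above. An illustrative sketch of retry with exponential backoff (the `retry_with_backoff` helper and its `base_delay` parameter are assumptions for illustration, not gateway API):

```python
import time

def retry_with_backoff(call, retry_count=3, base_delay=0.5):
    """Retry a callable, doubling the delay after each failure."""
    for attempt in range(retry_count + 1):
        try:
            return call()
        except Exception:
            if attempt == retry_count:
                raise  # retries exhausted; surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulate an endpoint that fails twice before succeeding.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, retry_count=3, base_delay=0.01))  # ok
print(len(attempts))  # 3
```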

Performance

  • Throughput: ~1000 requests/second (with caching)
  • Latency: <10ms overhead (cache hit), <100ms overhead (cache miss)
  • Memory: ~50MB base + cache size

License

MIT License - See LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


⭐ Star us on GitHub if you find this useful!
