Marketing Connect MCP Services

A Model Context Protocol (MCP) server for Marketing Connect AI integrations.

What is MCP?

The Model Context Protocol (MCP) is an open standard from Anthropic that enables AI models to securely interact with external tools and data sources. This server exposes:

  • Tools: Functions the AI can invoke (like API endpoints)
  • Resources: Data loaded into AI context (like configuration or schemas)
  • Prompts: Reusable interaction templates
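
As a quick illustration, here is a minimal, hypothetical FastMCP server (FastMCP is the framework this project builds on, introduced below) exposing one primitive of each kind; the names are invented for the example:

from fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.tool()
async def send_greeting(name: str) -> str:
    """A tool: a function the AI can invoke."""
    return f"Hello {name}!"

@mcp.resource("demo://settings")
async def settings_resource() -> str:
    """A resource: data the AI can load into context."""
    return '{"region": "us-east-1"}'

@mcp.prompt()
async def campaign_review(topic: str) -> str:
    """A prompt: a reusable interaction template."""
    return f"Review the marketing campaign for: {topic}"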

What is FastMCP?

FastMCP is a high-level Python framework that simplifies building MCP servers. It provides a decorator-based API similar to FastAPI, reducing boilerplate and accelerating development.

Why Use FastMCP?

Benefit                     | Description
Minimal Boilerplate         | Simple decorators like @mcp.tool() replace complex class hierarchies
Automatic Schema Generation | Input/output schemas generated from Python type hints
Built-in HTTP Transport     | Production-ready server with health checks and SSE support
Pydantic Integration        | Native support for Pydantic models as tool inputs
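
The last two rows mean a tool can take a Pydantic model as input and get its JSON schema for free from type hints. A small hypothetical sketch (the model and tool names are not part of this project):

from fastmcp import FastMCP
from pydantic import BaseModel, Field

mcp = FastMCP("demo")

# Hypothetical input model; FastMCP derives the tool's input schema from
# the type hints and Field metadata below.
class CampaignQuery(BaseModel):
    campaign_id: str = Field(description="Campaign identifier")
    include_metrics: bool = Field(default=False, description="Include performance metrics")

@mcp.tool()
async def lookup_campaign(query: CampaignQuery) -> str:
    """Look up a marketing campaign."""
    metrics = "with metrics" if query.include_metrics else "without metrics"
    return f"Campaign {query.campaign_id} ({metrics})"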

MCP SDK vs FastMCP Comparison

Without FastMCP (using low-level MCP SDK):

from mcp.server import Server
from mcp.types import Tool, TextContent
import mcp.server.stdio

server = Server("my-server")

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="greet",
            description="Greet a user",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "User name"}
                },
                "required": ["name"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "greet":
        return [TextContent(type="text", text=f"Hello {arguments['name']}!")]

async def main():
    async with mcp.server.stdio.stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

With FastMCP:

from fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
async def greet(name: str) -> str:
    """Greet a user."""
    return f"Hello {name}!"

if __name__ == "__main__":
    mcp.run()

FastMCP reduces ~30 lines to ~10 while maintaining full MCP protocol compliance.

Quick Start

Prerequisites

Install from Devshell:

  • Python 3.11+ (3.13 recommended)
  • make
  • buildi-cli
  • tfl
  • httpie

Installation

# Install uv package manager
make ci-prebuild

# Install all dependencies (creates .venv automatically)
make build

Model Generation

This server uses Pydantic models generated from an OpenAPI specification. The models are generated using datamodel-code-generator.
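
For orientation, generated models are ordinary Pydantic classes. The fields shown here are invented; the real classes and fields come from mcpservices.api.yml and its schemas:

# Hypothetical shape of a generated model; the actual output of
# datamodel-code-generator depends entirely on the OpenAPI spec.
from pydantic import BaseModel, Field

class GreetingResponse(BaseModel):
    message: str = Field(description="Greeting text returned to the caller")
    locale: str | None = Field(default=None, description="Locale used for the greeting")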

From Local .tgz (npm-packed OpenAPI spec)

# Run in marketing-connect-spec/marketing-connect-mcp-services,
# inside the src/main/resources/model/api directory
tar -czvf models.tgz mcpservices.api.yml schema/

# Generate models from a local .tgz file
make generate-models SPEC_TGZ=path/to/models.tgz

From Artifactory URL

# Generate models from a URL (e.g., artifactory)
make generate-models-url SPEC_URL=https://artifactory.example.com/openapi-spec.tgz

From Local YAML File

# Generate models directly from a local OpenAPI YAML file
make generate-models-local SPEC_FILE=path/to/openapi.yaml

Model Management

# Show generated model classes
make models-show

# Clean generated models
make models-clean

The generated models are placed in src/marketing_connect_mcp_services/models/ and can be imported as:

from marketing_connect_mcp_services.models import ProductDetails, GreetingResponse
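
For example, a generated class can serve as a typed payload inside a tool. This is only a sketch: the describe_product tool and the id/name fields are invented, and the real field names come from the OpenAPI schemas:

from marketing_connect_mcp_services.models import ProductDetails
from marketing_connect_mcp_services.server import mcp

@mcp.tool()
async def describe_product(product_id: str) -> str:
    """Return product details as JSON (placeholder data for illustration)."""
    # "id" and "name" are hypothetical fields; substitute the fields actually
    # generated from mcpservices.api.yml.
    details = ProductDetails.model_validate({"id": product_id, "name": "Sample product"})
    return details.model_dump_json()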

Running the Server

# Start the server (default: 0.0.0.0:8000)
make run

# Or with debug mode
make run-debug

# Or directly with uv
uv run marketing-connect-mcp --port 3000
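
Under the hood, the CLI entry point hands off to FastMCP's HTTP transport. A rough sketch, assuming a FastMCP 2.x-style run() that accepts a transport name plus host/port (the exact transport string and options vary by FastMCP version, so treat this as illustrative):

from fastmcp import FastMCP

mcp = FastMCP("marketing-connect-mcp-services")

def main() -> None:
    # Serves the Streamable HTTP endpoint at /mcp on the given host/port.
    # Some FastMCP versions name this transport "streamable-http" instead of "http".
    mcp.run(transport="http", host="0.0.0.0", port=8000)

if __name__ == "__main__":
    main()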

Verify Deployment

The server exposes health check endpoints for deployment verification:

Endpoint    | Description
GET /       | Service overview
GET /health | Health check (returns {"status": "UP"})
GET /info   | Server metadata (version, config, uptime)
POST /mcp   | MCP protocol endpoint (for AI clients)

# Check health
curl http://localhost:8000/health

# Get server info
curl http://localhost:8000/info

# Service overview
curl http://localhost:8000/

Testing the MCP Protocol

The MCP endpoint uses the Streamable HTTP transport and requires specific headers:

# Initialize MCP session
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {"name": "test-client", "version": "1.0"}
    }
  }'

Expected response (SSE format):

event: message
data: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{...},"serverInfo":{"name":"marketing-connect-mcp-services","version":"..."}}}

# List available tools
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
  }'

Note: The MCP protocol is stateful. The initialize request works without a session, but subsequent requests such as tools/list and tools/call require the Mcp-Session-Id header returned in the initialization response. For full protocol testing, use an MCP client library.
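
For a stateful check from Python, the official MCP SDK's streamable HTTP client manages the session header for you. A minimal sketch (import paths can shift between SDK releases):

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # The client negotiates the Mcp-Session-Id during initialize().
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # same as the tools/list request above
            print([tool.name for tool in tools.tools])

asyncio.run(main())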

Project Structure

marketing-connect-mcp-services/
├── src/marketing_connect_mcp_services/
│   ├── __init__.py          # Package exports
│   ├── server.py            # FastMCP server setup
│   ├── config.py            # Pydantic settings
│   ├── cli.py               # CLI entry point
│   ├── tools/               # MCP tools (AI-invokable functions)
│   │   ├── __init__.py
│   │   └── example.py       # Example tool patterns
│   ├── resources/           # MCP resources (context data)
│   │   ├── __init__.py
│   │   └── example.py       # Example resource patterns
│   └── prompts/             # MCP prompts (interaction templates)
│       ├── __init__.py
│       └── example.py       # Example prompt patterns
├── tests/                   # Test suite
├── pyproject.toml           # Hatchling build config + dependencies
├── uv.lock                  # Dependency lock file
├── Makefile                 # Build commands
└── .env.example             # Environment template
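
For orientation, server.py owns the shared FastMCP instance, and the tools/resources/prompts packages register themselves simply by being imported. A rough sketch of the wiring (the real file also reads settings from config.py):

from fastmcp import FastMCP

mcp = FastMCP("marketing-connect-mcp-services")

# Importing these modules executes their @mcp.tool()/@mcp.resource()/@mcp.prompt()
# decorators, which registers everything on the shared instance above.
from marketing_connect_mcp_services.tools import example as example_tools          # noqa: E402,F401
from marketing_connect_mcp_services.resources import example as example_resources  # noqa: E402,F401
from marketing_connect_mcp_services.prompts import example as example_prompts      # noqa: E402,F401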

Build System

This project uses modern Python tooling:

Tool      | Purpose
Hatchling | Build backend (PEP 517)
uv        | Fast package manager (10-100x faster than pip)

Why uv?

  • Fast: Written in Rust, installs packages 10-100x faster than pip
  • Lock files: uv.lock ensures reproducible builds
  • Compatible: Works with standard pyproject.toml
  • Simple: Single binary, no plugins needed

Configuration

Configuration is managed via environment variables (prefix: MCP_).

Copy .env.example to .env and customize:

# Server identity
MCP_SERVER_NAME=marketing-connect-mcp-services
MCP_SERVER_VERSION=1.0.0

# HTTP server
MCP_HOST=0.0.0.0
MCP_PORT=8000

# Logging
MCP_DEBUG=false
MCP_LOG_LEVEL=INFO

# Application settings
MCP_BASE_URL=https://your-app-url.com
MCP_REGION=us-east-1
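
These variables map onto typed settings in config.py via Pydantic. A simplified sketch, assuming pydantic-settings with the MCP_ prefix; the real settings class may define more fields:

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="MCP_", env_file=".env")

    server_name: str = "marketing-connect-mcp-services"
    server_version: str = "1.0.0"
    host: str = "0.0.0.0"
    port: int = 8000
    debug: bool = False
    log_level: str = "INFO"
    base_url: str = ""
    region: str = "us-east-1"

settings = Settings()  # e.g. MCP_PORT=9000 overrides settings.port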

JPMC Artifact Repository

The PyPI index is configured in pyproject.toml:

[tool.uv]
index-url = "https://artifacts-read.gkp.jpmchase.net/artifactory/api/pypi/pypi/simple"

You can also override via environment variable:

export UV_INDEX_URL=https://your-pypi-mirror.com/simple

Development

Testing

# Run tests
make test

# Run tests with coverage
make cover

# Verbose output
make test-verbose

Code Quality

# Format code
make format

# Lint code
make lint

# Auto-fix lint issues
make lint-fix

# Type check
make typecheck

# Run all checks
make check

Pre-commit Hooks

make precommit

Dependency Management

# Update lock file
make lock

# Update all dependencies to latest
make update

# Install production deps only
make build-prod

Adding Custom Integrations

Adding a Tool

Create a new file in tools/ and register it:

# tools/my_tools.py
from marketing_connect_mcp_services.server import mcp

@mcp.tool()
async def my_custom_tool(param: str) -> str:
    """Description the AI will see."""
    return f"Result: {param}"

Then import in server.py:

from marketing_connect_mcp_services.tools import my_tools  # noqa: F401

Adding a Resource

# resources/my_resources.py
from marketing_connect_mcp_services.server import mcp

@mcp.resource("myapp://config")
async def get_config() -> str:
    """Returns configuration data."""
    return "config data"

Adding a Prompt

# prompts/my_prompts.py
from marketing_connect_mcp_services.server import mcp

@mcp.prompt()
async def analysis_prompt(topic: str) -> str:
    """Generate an analysis prompt."""
    return f"Please analyze: {topic}"

CI/CD

# Full CI pipeline (clean, build, test, package)
make ci

# Generate SSAP reports
make ssap

# Build wheel package
make package

Make Targets

Target                     | Description
make run                   | Start the MCP server
make run-debug             | Start with debug logging
make build                 | Install all dependencies
make build-prod            | Install production deps only
make test                  | Run tests
make cover                 | Run tests with coverage
make format                | Format code
make lint                  | Lint code
make typecheck             | Run mypy type checking
make check                 | Run lint + typecheck + test
make generate-models       | Generate Pydantic models from .tgz
make generate-models-url   | Generate models from URL
make generate-models-local | Generate models from local YAML
make models-show           | Show generated model classes
make models-clean          | Remove generated models
make lock                  | Update uv.lock
make update                | Update all dependencies
make ci                    | Full CI pipeline
make ssap                  | Generate SSAP reports
make package               | Build wheel
make help                  | Show all targets
