
Updated: Feb 27, 2026

websearch-mcp

An MCP server that provides web search, page fetching, and image description tools for AI agents. It uses SearXNG for search, Crawl4AI for content extraction, any OpenAI-compatible LLM for server-side synthesis, and an optional vision language model (VLM) for image description.

Prerequisites

  • Python 3.12+
  • SearXNG instance with JSON format enabled (search.formats: [json] in settings.yml)
  • OpenAI-compatible LLM endpoint (OpenAI, Ollama, vLLM, LiteLLM, etc.)
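SearXNG only serves results as JSON when that format is enabled in its configuration. A minimal fragment of `settings.yml` (other keys shown are your instance's own) might look like:

```yaml
# settings.yml -- enable JSON output alongside the default HTML
search:
  formats:
    - html
    - json
```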

Installation

```bash
# Run directly from GitHub
uvx --from "git+https://github.com/<org>/websearch-mcp" websearch-mcp

# Or clone and install locally
git clone https://github.com/<org>/websearch-mcp
cd websearch-mcp
uv sync
uv run websearch-mcp
```

Tools

web_search

Search the web via SearXNG, fetch the top result pages, and synthesize an answer with the LLM.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `query` | string | Yes | Search query |
| `max_results` | int | No | Max results (default: 10) |
| `allowed_domains` | string[] | No | Only include these domains |
| `blocked_domains` | string[] | No | Exclude these domains |
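As an illustration, an MCP `tools/call` request for this tool could carry arguments like the following (the query and domain values are made up):

```json
{
  "name": "web_search",
  "arguments": {
    "query": "rust async runtime comparison",
    "max_results": 5,
    "allowed_domains": ["docs.rs", "tokio.rs"]
  }
}
```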

webfetch

Fetch a single URL, extract content, and process with LLM.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `url` | string | Yes | URL to fetch |
| `prompt` | string | No | Custom instruction for LLM processing |
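A sketch of the corresponding tool-call arguments (URL and prompt are illustrative):

```json
{
  "name": "webfetch",
  "arguments": {
    "url": "https://example.com/article",
    "prompt": "Summarize the key points in three bullets"
  }
}
```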

image-description

Describe an image using a vision language model (VLM). Accepts either base64-encoded image data or an absolute filesystem path to an image file.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `image` | string | Yes | Base64-encoded image data or absolute filesystem path |

Returns a JSON object with description, success status, and optional error message.
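For the base64 form, the `image` argument is simply the encoded file bytes. A small helper for producing it (hypothetical, not part of the server) could look like:

```python
import base64
from pathlib import Path

def image_to_b64(path: str) -> str:
    """Read an image file and return its base64-encoded contents as ASCII text."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")
```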

Environment Variables

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `SEARXNG_URL` | Yes | (none) | Base URL of SearXNG instance |
| `LLM_BASE_URL` | Yes | (none) | OpenAI-compatible endpoint base URL |
| `LLM_API_KEY` | Yes | (none) | API key for the LLM endpoint |
| `LLM_MODEL` | Yes | (none) | Model name for chat completions |
| `CACHE_TTL_SECONDS` | No | 900 | Cache TTL in seconds (0 to disable) |
| `CACHE_MAX_ENTRIES` | No | 1000 | Max cache entries before LRU eviction |
| `FETCH_TIMEOUT` | No | 30 | Per-page fetch timeout in seconds |
| `LLM_TIMEOUT` | No | 60 | LLM request timeout in seconds |
| `MAX_CONTENT_SIZE` | No | 5242880 | Max content size in bytes (5 MB) |
| `DEFAULT_MAX_RESULTS` | No | 10 | Default result count for web_search |
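When running the server standalone (for example with the HTTP transport below), the required variables can be exported in the shell first; the values here are placeholders for local Ollama and SearXNG instances:

```shell
# Required configuration -- adjust to your own endpoints
export SEARXNG_URL="http://localhost:8888"
export LLM_BASE_URL="http://localhost:11434/v1"
export LLM_API_KEY="ollama"
export LLM_MODEL="llama3"
```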

VLM Configuration (for image-description tool)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `VLM_BASE_URL` | No | `LLM_BASE_URL` | OpenAI-compatible endpoint for VLM |
| `VLM_API_KEY` | No | `LLM_API_KEY` | API key for VLM endpoint |
| `VLM_MODEL` | No | `LLM_MODEL` | Model name for image description |
| `MAX_IMAGE_SIZE` | No | 10485760 | Max image size in bytes (10 MB) |

Agent Configuration

Claude Desktop (stdio)

```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3"
      }
    }
  }
}
```

Generic MCP Config (stdio)

```json
{
  "command": "uvx",
  "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
  "env": {
    "SEARXNG_URL": "http://localhost:8888",
    "LLM_BASE_URL": "https://api.openai.com/v1",
    "LLM_API_KEY": "sk-...",
    "LLM_MODEL": "gpt-4o-mini"
  }
}
```

HTTP Transport

```bash
websearch-mcp --transport http --port 3000
```

Then point your MCP client at the server's HTTP endpoint:

```json
{
  "url": "http://localhost:3000/mcp"
}
```

Development

```bash
uv sync
uv run pytest tests/ -v
```

Example Usage

image-description tool

With base64-encoded image:

```python
# Using base64-encoded image data
image_b64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
result = await image_description(image_b64)
# Returns: {"description": "A small white square", "success": true, "error": null}
```

With filesystem path:

```python
# Using an absolute filesystem path
result = await image_description("/path/to/image.png")
# Returns: {"description": "A detailed description of the image", "success": true, "error": null}
```

With Ollama (using llava or another VLM):

```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3",
        "VLM_BASE_URL": "http://localhost:11434/v1",
        "VLM_API_KEY": "ollama",
        "VLM_MODEL": "llava"
      }
    }
  }
}
```
