Velociraptor MCP Server
Originally built by: mgreen27 (mcp-velociraptor)
Extended by: Snoe Findley
The Velociraptor Model Context Protocol (MCP) Server is an integration interface designed for digital forensics and incident response (DFIR). It enables LLM frameworks (such as Claude, Gemini, Open WebUI, and n8n) to interface programmatically with the Velociraptor endpoint monitoring engine.
Overview
This MCP server exposes standard Velociraptor capabilities to AI agents, allowing them to:
- Conduct file and memory scans using YARA.
- Execute remediation actions, including network isolation and process termination.
- Perform artifact collection across Windows, macOS, and Linux (e.g., MFT parsing, Event Log extraction, USN Journal analysis, and process memory inspection).
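As an illustration of the collection workflow, the sketch below composes a server-side VQL query that schedules a built-in artifact on a single client. The helper name, client id, and artifact choice are illustrative, not the server's actual code:

```python
import json

def build_collection_query(client_id: str, artifact: str) -> str:
    """Compose a server-side VQL query that schedules one built-in
    artifact collection on one client (illustrative helper)."""
    # json.dumps quotes the values safely for embedding in the VQL string.
    return (
        "SELECT collect_client("
        f"client_id={json.dumps(client_id)}, "
        f"artifacts=[{json.dumps(artifact)}]) AS flow "
        "FROM scope()"
    )
```

The query uses Velociraptor's `collect_client()` VQL function, which returns a flow object the server can poll for results.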
Security Note: The server enforces data limits and VQL string sanitization to mitigate token overflows and prevent VQL injection. It relies on standard, built-in Velociraptor artifacts rather than arbitrary command-line scripts, which helps preserve endpoint stability.
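A minimal sketch of what that sanitization and limiting might look like — the helper names and the byte budget are assumptions, not the server's actual implementation:

```python
import json

MAX_RESPONSE_BYTES = 50_000  # assumed per-response data cap

def sanitize_vql_string(value: str) -> str:
    """Escape characters that could break out of a VQL string literal."""
    # Escape backslashes first so existing escapes are not double-processed.
    return value.replace("\\", "\\\\").replace("'", "\\'")

def limit_rows(rows: list[dict]) -> list[dict]:
    """Keep query rows until the serialized size would exceed the budget."""
    kept, size = [], 0
    for row in rows:
        size += len(json.dumps(row))
        if size > MAX_RESPONSE_BYTES:
            # Replace the overflow with an explicit truncation marker.
            kept.append({"truncated": True, "rows_kept": len(kept)})
            break
        kept.append(row)
    return kept
```

Escaping untrusted values before interpolation into VQL, and capping serialized output, are the two controls the note above describes.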
1. Prerequisites
A running Velociraptor server and corresponding API configuration file are required. Additionally:
- For Local (stdio) deployments: Python 3.10+ must be installed.
- For Network (SSE) deployments: Docker and Docker Compose must be installed.
- Create a dedicated API user on your Velociraptor server:
  velociraptor --config /etc/velociraptor/server.config.yaml config api_client --name mcp_agent --role administrator,api api_client.yaml
- Copy api_client.yaml to the root directory of this project.
- Rename .env.example to .env and set the config environment variable: VELOCIRAPTOR_API_CONFIG=api_client.yaml
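The bridge can then resolve that environment variable at startup; a Python sketch (the helper name is hypothetical):

```python
import os
from pathlib import Path

def resolve_api_config(default: str = "api_client.yaml") -> Path:
    """Resolve the Velociraptor API config path from the environment,
    falling back to api_client.yaml in the working directory."""
    path = Path(os.environ.get("VELOCIRAPTOR_API_CONFIG", default))
    if not path.is_file():
        raise FileNotFoundError(
            f"API config not found at {path}; run the config api_client "
            "step and copy the file into the project root."
        )
    return path
```

Failing fast here gives a clear error before any MCP client attempts a tool call.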
2. Local Deployment (stdio)
Deploying via standard input/output (stdio) is the recommended configuration for local desktop clients such as Claude Desktop, Claude Code, or the Gemini CLI.
Setup
- Ensure Python 3.10+ is installed.
- Initialize and activate a virtual environment:
  python -m venv .venv
  # Windows
  .venv\Scripts\activate
  # Mac/Linux
  source .venv/bin/activate
- Install the dependencies:
  pip install -r requirements.txt
Connecting Your Client
Add the connection into your MCP client's configuration file using the stdio transport. Point it directly to your virtual environment's python executable:
{
"mcpServers": {
"velociraptor-dfir": {
"command": "/path/to/repo/.venv/bin/python",
"args": ["/path/to/repo/mcp_velociraptor_bridge.py"]
}
}
}
3. Network Deployment (Docker / SSE)
For server-based platforms like Open WebUI or n8n, the MCP server can be exposed over the local network using Server-Sent Events (SSE) via the included Docker Compose configuration.
Setup
This repository uses the FastMCP HTTP server to expose an SSE endpoint over HTTP.
- Ensure api_client.yaml and .env are placed in the root directory.
- Build and start the Docker container:
  docker compose up -d
- The server will be accessible via SSE at http://<your-host-ip>:8088/sse.
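To confirm the endpoint is reachable from a consumer host, a small stdlib-only probe can check for the SSE content type without consuming the event stream (the function name is illustrative):

```python
import urllib.request

def sse_endpoint_ready(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers 200 with an SSE content type.

    Only the response headers are read; the long-lived event stream
    itself is never consumed.
    """
    req = urllib.request.Request(url, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            ctype = resp.headers.get("Content-Type", "")
            return resp.status == 200 and "text/event-stream" in ctype
    except OSError:
        return False
```

Example: `sse_endpoint_ready("http://192.168.1.10:8088/sse")` from an authorized LLM host should return True once the container is up.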
⚠️ Security & Hardening Requirements
[!CAUTION] This MCP Server operates with Administrator privileges within Velociraptor. It has the capability to terminate processes, retrieve files, and access sensitive endpoint data across the deployment.
When exposing the server over a network via Docker:
- No Native Authentication: The built-in SSE server does not provide HTTP authentication; any client that can reach the port can invoke tools.
- Exposure Reduction: Do not expose port 8088 to the public internet or untrusted networks.
- Network Proxies: You must deploy a reverse proxy (e.g., Nginx, Traefik) in front of the container to enforce mutual TLS (mTLS), strict IP allowlisting, or equivalent network-level authentication. Traffic should be explicitly restricted to authorized LLM consumption nodes.
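As one example of that proxy layer, an illustrative Nginx fragment enforcing mTLS and IP allowlisting might look like the following — the certificate paths and the allowed subnet are placeholders, not values from this repository:

```nginx
server {
    listen 8443 ssl;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Require client certificates signed by this CA (mTLS)
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client on;

    # Restrict to the subnet hosting authorized LLM consumers
    allow 10.0.5.0/24;
    deny  all;

    location /sse {
        proxy_pass http://127.0.0.1:8088;
        # SSE needs buffering disabled and a long read timeout
        proxy_buffering off;
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
    }
}
```

With this in place, port 8088 should be bound only to localhost (or an internal Docker network) so all traffic passes through the proxy.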
Documentation
For a comprehensive list of supported macOS, Linux, and Windows forensic tools, configuration details, and architecture diagrams, please reference the Comprehensive Documentation.