# OmniAudit MCP
Single self-hosted MCP connector that ships three production modules:
- **AuditLens** (`auditlens.*`): GitHub PR/run evidence triage and issue drafting.
- **SiteLint** (`sitelint.*`): live URL audit and report generation.
- **Release Butler** (`releasebutler.*`): release asset checksums and release-note workflows.
It exposes one MCP endpoint at `POST /mcp` and includes a lightweight dashboard at `GET /ui`.
## Features implemented
- MCP tools across all required namespaces (`auditlens`, `sitelint`, `releasebutler`, `core`).
- Backward-compatible `releasebutler.generate_notes` extensions:
  - optional `from_tag`
  - optional `to_tag`
  - optional `fallback_window`
- Optional `releasebutler.create_release` local file asset upload (`assets[]`) with per-file outcome metadata: `uploaded_assets[]` and `failed_assets[]`.
- Object storage hardening:
  - `local` backend
  - `s3` backend with dual-read behavior (new writes go to S3, legacy local refs stay readable)
- Policy controls:
  - repo write allowlist
  - URL allowlist/denylist
  - write-operation confirmation token gate
  - rate limiting for scan submissions and GitHub writes
- Receipt-first write operations with immutable output references.
- Append-only audit log storage with hashed tool inputs.
- Observability baseline:
  - structured logs (`LOG_FORMAT=json|plain`)
  - optional OTLP traces (`OTEL_ENABLED=true`)
  - Prometheus `/metrics` endpoint (`PROMETHEUS_ENABLED=true`)
- GitHub auth providers:
  - fine-grained PAT
  - GitHub App installation token flow
- S3/MinIO-compatible object storage support for artifacts and reports.
- SQL storage for jobs, receipts, audit logs, and secret records.
## Repository layout

- `apps/mcp_server` - FastAPI MCP server and dashboard endpoints
- `packages/omniaudit` - domain modules, runtime, security, storage
- `services/worker` - Celery worker and async SiteLint task
- `tests` - unit + integration tests
- `infra` - Dockerfiles
## Architecture diagram

```mermaid
flowchart LR
  ChatGPT -->|MCP JSON-RPC| API["FastAPI /mcp"]
  API --> Runtime["MCP Runtime"]
  Runtime --> AuditLens["AuditLens"]
  Runtime --> SiteLint["SiteLint"]
  Runtime --> ReleaseButler["Release Butler"]
  Runtime --> Policy["Policy + Risk Gate + Rate Limits"]
  Runtime --> Receipts["Receipts + Audit Logs"]
  Runtime --> ObjectStore["ObjectStore (Local or S3 Dual-Read)"]
  Runtime --> Jobs["JobStore"]
  Worker["Celery Worker"] --> SiteLint
  SiteLint --> Reports["Reports + Evidence"]
  ReleaseButler --> GitHub["GitHub API"]
  AuditLens --> GitHub
  API --> Metrics["/metrics"]
```
## Support matrix
| Capability | Status | Notes |
|---|---|---|
| MCP tool registry and dispatch | Ready | Backward-compatible tool names and required args maintained |
| Object storage backends | Ready | local and s3 with dual-read fallback |
| Release asset uploads | Ready | Local file path assets supported |
| Live smoke automation | Ready | scripts/smoke_hardening_pass2.sh |
| SiteLint wave 1 optional args | Ready | crawl_budget, entry_paths, auth_profile_id, baseline_scan_id |
| AuditLens wave 1 optional args | Ready | parser_profile, dedupe_strategy, issue drafting extensions |
| Release Butler wave 1 optional args | Ready | group_by, include_pr_links, draft, prerelease, dry_run |
| Governance baseline | Ready | CI workflows, templates, CODEOWNERS, policy docs |
## Quickstart (local)

- Create env and install dependencies:

  ```shell
  uv venv .venv
  uv pip install --python .venv/bin/python -e '.[test]'
  ```

- Configure environment:

  ```shell
  cp .env.example .env
  ```

- Bootstrap local data folders and key:

  ```shell
  ./scripts/bootstrap.sh
  ```

- Run API:

  ```shell
  PYTHONPATH=packages:apps:services .venv/bin/uvicorn mcp_server.main:app --host 0.0.0.0 --port 8080
  ```

  Optional SiteLint browser stack:

  ```shell
  ./scripts/install_sitelint_toolchain.sh
  ```

- Health check:

  ```shell
  curl -s http://localhost:8080/healthz
  ```
## Quickstart (Docker Compose)

```shell
cp .env.example .env
./scripts/bootstrap.sh
docker compose up --build
```

- API endpoint: `http://localhost:8080/mcp`
- Metrics endpoint: `http://localhost:8080/metrics`
## ChatGPT Connector setup

In the ChatGPT Developer Mode connector modal:

- Name: `OmniAudit MCP`
- MCP Server URL: `https://<your-host>/mcp`
- Authentication:
  - set to `None` if `MCP_AUTH_MODE=none`
  - set to the API key flow if you enable `MCP_AUTH_MODE=api_key`
## Storage backend modes

Default mode is `local`:

```shell
OBJECT_STORE_BACKEND=local
```

S3/MinIO mode with dual-read, S3-write:

```shell
OBJECT_STORE_BACKEND=s3
OBJECT_STORE_BUCKET=omniaudit
OBJECT_STORE_PREFIX=omniaudit
S3_ENDPOINT_URL=http://minio:9000
S3_FORCE_PATH_STYLE=true
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
```
Behavior in `s3` mode:

- new immutable objects are written to `s3://<bucket>/<prefix>/<sha256>...`
- existing local file refs continue to resolve via fallback reads
- no migration is required for existing receipt `result_ref` values
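Since objects are content-addressed, the eventual S3 key can be derived from the object bytes alone. A minimal sketch, assuming the key component is a plain SHA-256 hex digest of the bytes (bucket and prefix mirror the sample config above; any key suffix after the digest is elided):

```shell
# Sketch: derive the content-addressed key for a blob.
# Assumption: keys are formed as s3://<bucket>/<prefix>/<sha256-of-bytes>.
BUCKET=omniaudit
PREFIX=omniaudit
SHA=$(printf 'hello' | sha256sum | cut -d' ' -f1)
echo "s3://$BUCKET/$PREFIX/$SHA"
```

Identical bytes always map to the same key, which is what keeps new writes immutable.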
## MCP examples

List tools:

```shell
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0",
  "id":1,
  "method":"tools/list",
  "params":{}
}'
```
Start site scan:

```shell
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0",
  "id":2,
  "method":"tools/call",
  "params":{
    "name":"sitelint.start_scan",
    "arguments":{
      "url":"https://example.com",
      "profile":"standard",
      "viewport_set":"desktop_mobile"
    }
  }
}'
```
Write-operation risk gate flow (`auditlens.create_issue`):

- Call without `confirmation_token` and read `structuredContent.confirmation_token`.
- Repeat the call with that token to execute the write and receive a `receipt_id`.
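The gate can be exercised with two nearly identical `tools/call` requests. A hedged sketch — `repo` and `title` values are placeholders, the real token comes from the first response's `structuredContent`, and `|| true` keeps the sketch copy-paste safe when no local server is running:

```shell
# Step 1: call the gated tool without a confirmation_token.
# The server returns structuredContent.confirmation_token instead of writing.
REQ1='{"jsonrpc":"2.0","id":10,"method":"tools/call","params":{"name":"auditlens.create_issue","arguments":{"repo":"owner/repo","title":"Example issue"}}}'
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d "$REQ1" || true

# Step 2: repeat with the token from step 1; the write executes and returns receipt_id.
TOKEN='<token-from-step-1>'
REQ2='{"jsonrpc":"2.0","id":11,"method":"tools/call","params":{"name":"auditlens.create_issue","arguments":{"repo":"owner/repo","title":"Example issue","confirmation_token":"'"$TOKEN"'"}}}'
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d "$REQ2" || true
```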
Generate tag-to-tag notes:

```shell
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0",
  "id":3,
  "method":"tools/call",
  "params":{
    "name":"releasebutler.generate_notes",
    "arguments":{
      "repo":"Prekzursil/AdrianaArt",
      "from_tag":"v1.0.0",
      "to_tag":"v2.0.0",
      "fallback_window":25
    }
  }
}'
```
Create release with local assets (confirmation flow):

- First call `releasebutler.create_release` without `confirmation_token`.
- Re-call with the returned `confirmation_token` and local file paths in `assets[]`.
- Inspect `uploaded_assets` and `failed_assets` in `structuredContent`.
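A sketch of the second (confirmed) call. Only `assets[]` and `confirmation_token` come from the docs above; the other argument names (e.g. `tag`), the repo, and the file paths are illustrative assumptions, and `|| true` keeps the sketch runnable without a live server:

```shell
# Hypothetical confirmed create_release call with local file assets.
# Re-uses the confirmation_token returned by the first (ungated) call.
REQ='{"jsonrpc":"2.0","id":12,"method":"tools/call","params":{"name":"releasebutler.create_release","arguments":{"repo":"owner/repo","tag":"v2.0.0","assets":["dist/app.tar.gz","dist/app.tar.gz.sha256"],"confirmation_token":"<token-from-first-call>"}}}'
curl -s http://localhost:8080/mcp -H 'Content-Type: application/json' -d "$REQ" || true
# Then check structuredContent.uploaded_assets and structuredContent.failed_assets.
```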
## Live smoke workflow

Run production-style dual-read/S3-write smoke checks locally:

```shell
./scripts/smoke_hardening_pass2.sh
```

Outputs:

- `artifacts/smoke/<timestamp>/summary.json`
- `artifacts/smoke/<timestamp>/responses/*.json`
Key assertions:

- legacy local refs remain readable after switching to the S3 backend
- new writes land at `s3://...` URIs
- release upload confirmation flow succeeds with local assets
- metrics endpoint exposes hardening counters
## Observability

Structured logging:

```shell
LOG_FORMAT=json
```

Optional OTLP tracing:

```shell
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318/v1/traces
```

Prometheus metrics:

```shell
PROMETHEUS_ENABLED=true
```

Exposed series include:

- `omniaudit_tool_calls_total{tool,status}`
- `omniaudit_tool_latency_seconds{tool}`
- `omniaudit_write_gate_denied_total{tool}`
- `omniaudit_rate_limit_denied_total{bucket}`
## Tests

```shell
TMPDIR=/tmp TEMP=/tmp TMP=/tmp .venv/bin/pytest -q -s
```
## Notes

- `sitelint.start_scan` runs inline by default for deterministic single-user behavior.
- Set `SITELINT_ASYNC_MODE=true` and run the `worker` service to process scan jobs via Celery.
- GitHub API operations require valid credentials in `.env`.
- In Docker + MinIO setups, keep `S3_FORCE_PATH_STYLE=true` for compatibility.
- For roadmap and release-note policy, see `docs/ROADMAP.md` and `docs/CHANGELOG_POLICY.md`.
See `docs/ARCHITECTURE.md` and `docs/OPERATIONS.md` for details.