MCP Hub

openchrome-mcp

Validation Failed

Open-source browser automation MCP server. Control your real Chrome from any AI agent.

Stars
12
Updated
Feb 24, 2026
Validated
Feb 26, 2026

Validation Error:

Process exited with code 1. stderr:

Usage: openchrome [options] [command]

MCP server for parallel Claude Code browser sessions via CDP

Options:
  -V, --version  output the version number
  -h, --help     display help for command

Commands:
  install [options]  [DEPRECATED] Extension install is no longer needed. Use CDP mode instead.
  uninstall          [DEPRECATED] No longer needed - CDP mode has no extension to uninstall
  s

Quick Install

npx -y openchrome-mcp


OpenChrome

Smart. Fast. Parallel.
Browser automation MCP server that uses your real Chrome.


Traditional vs OpenChrome


What is OpenChrome?

Imagine 20+ parallel Playwright sessions — but already logged in to everything, invisible to bot detection, and sharing one Chrome process at 300MB. That's OpenChrome.

Search across 20 sites simultaneously. Crawl authenticated dashboards in seconds. Debug production UIs with real user sessions. Connect to OpenClaw and give your AI agent browser superpowers across Telegram, Discord, or any chat platform.

You: oc compare "AirPods Pro" prices across Amazon, eBay, Walmart,
     Best Buy, Target, Costco, B&H, Newegg — find the lowest

AI:  [8 parallel workers, all sites simultaneously]
     Best Buy:  $179 ← lowest (sale)
     Amazon:    $189
     Costco:    $194 (members)
     ...
     Time: 2.8s | All prices from live pages, already logged in.
|  | Traditional | OpenChrome |
|---|---|---|
| 5-site task | ~250s (login each) | ~3s (parallel) |
| Memory | ~2.5 GB (5 browsers) | ~300 MB (1 Chrome) |
| Auth | Every time | Never |
| Bot detection | Flagged | Invisible |

Guided, Not Guessing

The bottleneck in browser automation isn't the browser — it's the LLM thinking between each step. Every tool call costs 5–15 seconds of inference time. When an AI agent guesses wrong, it doesn't just fail — it spends another 10 seconds thinking about why, then another 10 seconds trying something else.

Playwright agent checking prices on 5 sites:

  Site 1:  launch browser           3s
           navigate                  2s
           ⚡ bot detection          LLM thinks... 12s → retry with UA
           ⚡ CAPTCHA                LLM thinks... 10s → stuck, skip
           navigate to login         2s
           ⚡ no session             LLM thinks... 12s → fill credentials
           2FA prompt               LLM thinks... 10s → stuck
           ...
           finally reaches product   after ~20 LLM calls, ~4 minutes

  × 5 sites, sequential  =  ~100 LLM calls,  ~20 minutes,  ~$2.00

  Actual work: 5 calls.  Wasted on wandering: 95 calls.

OpenChrome eliminates this entirely — your Chrome is already logged in, and the hint engine corrects mistakes before they cascade:

OpenChrome agent checking prices on 5 sites:

  All 5 sites in parallel:
    navigate (already authenticated)     1s
    read prices                          2s
    ⚡ stale ref on one site
      └─ Hint: "Use read_page for fresh refs"    ← no guessing
    read_page → done                     1s

  = ~20 LLM calls,  ~15 seconds,  ~$0.40

The hint engine watches every tool call across 6 layers — error recovery, composite suggestions, repetition detection, sequence detection, learned patterns, and success guidance. When it sees the same error→recovery pattern 3+ times, it promotes it to a permanent rule across sessions.
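The promotion rule above can be sketched in a few lines. This is an illustrative model only, under the assumption that patterns are keyed by their error→recovery pair; the class and method names (`HintEngine`, `observe`, `isRule`) are hypothetical, not OpenChrome's actual API.

```typescript
// Hypothetical sketch of "seen 3+ times → permanent rule".
type Pattern = { error: string; recovery: string };

class HintEngine {
  private counts = new Map<string, number>();
  private rules = new Set<string>();

  // Record one observed error→recovery pair; promote it to a
  // permanent rule once it has been seen three or more times.
  observe(p: Pattern): void {
    const key = `${p.error}→${p.recovery}`;
    const n = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, n);
    if (n >= 3) this.rules.add(key);
  }

  isRule(p: Pattern): boolean {
    return this.rules.has(`${p.error}→${p.recovery}`);
  }
}

const engine = new HintEngine();
const staleRef = { error: "ref not found", recovery: "read_page for fresh refs" };
engine.observe(staleRef); // 1st sighting
engine.observe(staleRef); // 2nd sighting — still just a count
engine.observe(staleRef); // 3rd sighting — promoted
console.log(engine.isRule(staleRef)); // true
```

In the real engine the promoted rules would persist across sessions; here the set is in-memory only, to keep the sketch self-contained.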

|  | Playwright | OpenChrome | Savings |
|---|---|---|---|
| LLM calls | ~100 | ~20 | 80% fewer |
| Wall time | ~20 min | ~15 sec | 80x faster |
| Token cost | ~$2.00 | ~$0.40 | 5x cheaper |
| Wasted calls | ~95% | ~0% |  |

Quick Start

npx openchrome-mcp setup

That's it. Say oc to your AI agent.

Manual config

Claude Code:

claude mcp add openchrome -- npx -y openchrome-mcp serve --auto-launch

VS Code / Copilot (.vscode/mcp.json):

{
  "servers": {
    "openchrome": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "openchrome-mcp", "serve", "--auto-launch"]
    }
  }
}

Cursor / Windsurf / Other MCP clients:

{
  "mcpServers": {
    "openchrome": {
      "command": "npx",
      "args": ["-y", "openchrome-mcp", "serve", "--auto-launch"]
    }
  }
}

Examples

Parallel monitoring:

oc screenshot AWS billing, GCP console, Stripe, and Datadog — all at once
→ 4 workers, 3.1s, already authenticated everywhere

Multi-account:

oc check orders on personal and business Amazon accounts simultaneously
→ 2 workers, isolated sessions, same site, different accounts

Competitive intelligence:

oc compare prices for "AirPods Pro" across Amazon, eBay, Walmart, Best Buy
→ 4 workers, 4 sites, 2.4s, works past bot detection

47 Tools

| Category | Tools |
|---|---|
| Navigate & Interact | navigate, click_element, fill_form, wait_and_click, find, computer |
| Read & Extract | read_page, page_content, javascript_tool, selector_query, xpath_query |
| Environment | emulate_device, geolocation, user_agent, network |
| Storage & Debug | cookies, storage, console_capture, performance_metrics, request_intercept |
| Parallel Workflows | workflow_init, workflow_collect, worker_create, batch_execute |
| Memory | memory_record, memory_query, memory_validate |
Full tool list (47)

navigate computer read_page find click_element wait_and_click form_input fill_form javascript_tool page_reload page_content page_pdf wait_for user_agent geolocation emulate_device network selector_query xpath_query cookies storage console_capture performance_metrics request_intercept drag_drop file_upload http_auth worker_create worker_list worker_update worker_complete worker_delete tabs_create_mcp tabs_context_mcp tabs_close workflow_init workflow_status workflow_collect workflow_collect_partial workflow_cleanup execute_plan batch_execute lightweight_scroll memory_record memory_query memory_validate oc_stop


CLI

oc setup                    # Auto-configure
oc serve --auto-launch      # Start server
oc serve --headless-shell   # Headless mode
oc doctor                   # Diagnose issues

Cross-Platform

| Platform | Status |
|---|---|
| macOS | Full support |
| Windows | Full support (taskkill process cleanup) |
| Linux | Full support (Snap paths, CHROME_PATH env, --no-sandbox for CI) |
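The Linux notes above (Snap paths plus a CHROME_PATH override) suggest a simple discovery order. A minimal sketch, assuming an override-then-probe strategy; the candidate paths and the `resolveChrome` helper are illustrative, not OpenChrome's actual implementation:

```typescript
// Illustrative per-platform Chrome discovery with CHROME_PATH override.
import { existsSync } from "node:fs";

const CANDIDATES: Record<string, string[]> = {
  darwin: ["/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"],
  win32: ["C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"],
  linux: [
    "/usr/bin/google-chrome",
    "/snap/bin/chromium", // Snap-packaged Chromium
  ],
};

function resolveChrome(platform: string = process.platform): string | undefined {
  // Explicit override always wins — useful on CI and unusual installs.
  if (process.env.CHROME_PATH) return process.env.CHROME_PATH;
  return (CANDIDATES[platform] ?? []).find((p) => existsSync(p));
}

console.log(resolveChrome() ?? "Chrome not found; set CHROME_PATH");
```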

DOM Mode (Token Efficient)

read_page supports three output modes:

| Mode | Output | Tokens | Use Case |
|---|---|---|---|
| ax (default) | Accessibility tree with ref_N IDs | Baseline | Screen readers, semantic analysis |
| dom | Compact DOM with backendNodeId | ~5-10x fewer | Click, fill, extract — most tasks |
| css | CSS diagnostic info (variables, computed styles, framework detection) | Minimal | Debugging styles, Tailwind detection |

DOM mode example:

read_page tabId="tab1" mode="dom"

[page_stats] url: https://example.com | title: Example | scroll: 0,0 | viewport: 1920x1080

[142]<input type="search" placeholder="Search..." aria-label="Search"/>
[156]<button type="submit"/>Search
[289]<a href="/home"/>Home
[352]<h1/>Welcome to Example

DOM mode outputs [backendNodeId] as stable identifiers — they persist for the lifetime of the DOM node, unlike ref_N IDs which are cleared on each AX-mode read_page call.
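In practice this means a DOM-mode read can be followed directly by actions against the same ids, with no intervening re-read. A hypothetical follow-up to the example above (exact parameter names are illustrative):

```
read_page tabId="tab1" mode="dom"        → returns [142]<input .../>, [156]<button .../>

fill_form tabId="tab1" ref="142" value="airpods pro"
click_element tabId="tab1" ref="156"     # 142 and 156 remain valid between calls
```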


Stable Selectors

Action tools that accept a ref parameter (form_input, computer, etc.) support three identifier formats:

| Format | Example | Source |
|---|---|---|
| ref_N | ref_5 | From read_page AX mode (ephemeral) |
| Raw integer | 142 | From read_page DOM mode (stable) |
| node_N | node_142 | Explicit prefix form (stable) |

Backward compatible — existing ref_N workflows work unchanged. DOM mode's backendNodeId eliminates "ref not found" errors caused by stale references.
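The three formats in the table above can be distinguished purely by shape. A sketch of that normalization, assuming a two-way split into ephemeral AX refs and stable node ids; `parseRef` and the `Ref` type are hypothetical, not OpenChrome's internals:

```typescript
// Normalize the three accepted ref formats into one discriminated union.
type Ref =
  | { kind: "ax"; n: number }    // ref_N: ephemeral AX-tree id
  | { kind: "node"; n: number }; // raw integer / node_N: stable backendNodeId

function parseRef(ref: string): Ref {
  const ax = ref.match(/^ref_(\d+)$/);
  if (ax) return { kind: "ax", n: Number(ax[1]) };
  const node = ref.match(/^(?:node_)?(\d+)$/); // "142" and "node_142" both match
  if (node) return { kind: "node", n: Number(node[1]) };
  throw new Error(`Unrecognized ref format: ${ref}`);
}

console.log(parseRef("ref_5"));    // ephemeral AX ref 5
console.log(parseRef("142"));      // stable node id 142
console.log(parseRef("node_142")); // same node id, explicit prefix
```

Because `ref_N` is matched first, existing workflows keep their old meaning; only bare integers and the `node_` prefix route to the stable path.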


Session Persistence

Headless mode (--headless-shell) doesn't persist cookies across restarts. Enable storage state persistence to maintain authenticated sessions:

oc serve --persist-storage                         # Enable persistence
oc serve --persist-storage --storage-dir ./state    # Custom directory

Cookies and localStorage are saved atomically every 30 seconds and restored on session creation.
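"Saved atomically" typically means write-to-temp-then-rename, so a crash mid-write never leaves a half-written state file. A minimal sketch of that pattern, assuming a single JSON state file; the file name and state shape are illustrative, not OpenChrome's actual layout:

```typescript
// Atomic save: write a temp file, then rename over the target.
// rename() replaces the destination in one step on POSIX filesystems,
// so readers see either the old state or the new one — never a partial write.
import { writeFileSync, renameSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function saveStateAtomic(dir: string, state: object): string {
  const target = join(dir, "storage-state.json");
  const tmp = target + ".tmp";
  writeFileSync(tmp, JSON.stringify(state));
  renameSync(tmp, target);
  return target;
}

saveStateAtomic(tmpdir(), { cookies: [], localStorage: {} });
```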


Benchmarks

Measure token efficiency and parallel performance:

npm run benchmark                                    # Stub mode: AX vs DOM token efficiency (interactive)
npm run benchmark:ci                                 # Stub mode: AX vs DOM with JSON + regression detection
npm run benchmark -- --mode real                     # Real mode: actual MCP server (requires Chrome)
npx ts-node tests/benchmark/run-parallel.ts          # Stub mode: all parallel benchmark categories
npx ts-node tests/benchmark/run-parallel.ts --mode real --category batch-js --runs 1  # Real mode
npx ts-node tests/benchmark/run-parallel.ts --mode real --category realworld --runs 1  # Real-world benchmarks

By default, benchmarks run in stub mode — measuring protocol correctness and tool-call counts with mock responses. Use --mode real to spawn an actual MCP server subprocess and measure real performance (requires Chrome to be available).

Parallel benchmark categories:

| Category | What It Measures |
|---|---|
| Multi-step interaction | Form fill + click sequences across N parallel pages |
| Batch JS execution | N × javascript_tool vs 1 × batch_execute |
| Compiled plan execution | Sequential agent tool calls vs single execute_plan |
| Streaming collection | Blocking vs workflow_collect_partial |
| Init overhead | Sequential tabs_create vs batch workflow_init |
| Fault tolerance | Circuit breaker recovery speed |
| Scalability curve | Speedup efficiency at 1–50x concurrency |
| Real-world | Multi-site crawl, heavy JS, pipeline, scalability against public websites (httpbin.org, jsonplaceholder, example.com); not included in the default all-categories run, requires network |

Development

git clone https://github.com/shaun0927/openchrome.git
cd openchrome
npm install && npm run build && npm test

License

MIT
