browse-ai

Evidence-backed web research for AI agents. Real-time search with cited claims, confidence scores, and compare mode showing raw LLM hallucination vs evidence-backed answers.

glama · Stars: 3 · Forks: 1 · Updated: Mar 9, 2026 · Validated: Mar 10, 2026

BrowseAI Dev

npm · PyPI · License: MIT · Discord

Reliable research infrastructure for AI agents — real-time web search, evidence extraction, and structured citations. Every claim is backed by a URL. Every answer has a confidence score.

Agent → BrowseAI → Internet → Verified answers + sources

Website · Playground · API Docs · Discord


How It Works

search → fetch pages → extract claims → build evidence graph → cited answer

Every answer goes through the five-step verification pipeline above, so claims come from retrieved sources rather than model memory, and each one is backed by a real URL. Confidence scores are evidence-based, computed from source count, domain diversity, claim grounding, and citation depth.
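As a rough illustration of how a score could combine those signals (the weights, saturation point, and rounding below are invented for this sketch, not the service's actual formula):

```python
from urllib.parse import urlparse

def confidence_score(sources, grounded_claims, total_claims):
    """Toy confidence heuristic: more sources, more distinct domains,
    and a higher fraction of grounded claims all raise the score."""
    if total_claims == 0 or not sources:
        return 0.0
    source_signal = min(len(sources) / 5, 1.0)       # saturates at 5 sources
    domains = {urlparse(s).netloc for s in sources}
    diversity = len(domains) / len(sources)          # 1.0 = every source a distinct domain
    grounding = grounded_claims / total_claims       # fraction of claims with evidence
    return round(0.4 * grounding + 0.3 * source_signal + 0.3 * diversity, 2)
```

With three sources on three distinct domains and four of five claims grounded, this sketch yields 0.8; an answer with no sources scores 0.0.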

Quick Start

Python SDK

pip install browseai
from browseai import BrowseAI

client = BrowseAI(api_key="bai_xxx")

# Research with citations
result = client.ask("What is quantum computing?")
print(result.answer)
print(f"Confidence: {result.confidence:.0%}")
for source in result.sources:
    print(f"  - {source.title}: {source.url}")

Framework integrations:

pip install browseai[langchain]   # LangChain tools
pip install browseai[crewai]      # CrewAI integration
# LangChain
from browseai.integrations.langchain import BrowseAIAskTool
tools = [BrowseAIAskTool(api_key="bai_xxx")]

# CrewAI
from crewai import Agent
from browseai.integrations.crewai import BrowseAITool
researcher = Agent(
    role="Researcher",
    goal="Answer questions with cited evidence",
    backstory="Web research specialist",
    tools=[BrowseAITool(api_key="bai_xxx")],
)

MCP Server (Claude Desktop, Cursor, Windsurf)

npx browse-ai setup

Or manually add to your MCP config:

{
  "mcpServers": {
    "browse-ai": {
      "command": "npx",
      "args": ["-y", "browse-ai"],
      "env": {
        "SERP_API_KEY": "your-search-key",
        "OPENROUTER_API_KEY": "your-llm-key"
      }
    }
  }
}

REST API

# With your own keys (BYOK — free, no limits)
curl -X POST https://browseai.dev/api/browse/answer \
  -H "Content-Type: application/json" \
  -H "X-Tavily-Key: tvly-xxx" \
  -H "X-OpenRouter-Key: sk-or-xxx" \
  -d '{"query": "What is quantum computing?"}'

# With a BrowseAI API key
curl -X POST https://browseai.dev/api/browse/answer \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer bai_xxx" \
  -d '{"query": "What is quantum computing?"}'
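For scripting without curl, the same request can be built in plain Python using only the standard library. This is a minimal sketch; the keys shown are placeholders, and the helper only constructs the request (sending it requires valid credentials):

```python
import json
import urllib.request

API_URL = "https://browseai.dev/api/browse/answer"

def build_request(query, api_key=None, tavily_key=None, openrouter_key=None):
    """Build a POST /browse/answer request using either a BrowseAI key
    (Authorization header) or BYOK (X-Tavily-Key / X-OpenRouter-Key headers)."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    else:
        headers["X-Tavily-Key"] = tavily_key
        headers["X-OpenRouter-Key"] = openrouter_key
    data = json.dumps({"query": query}).encode()
    return urllib.request.Request(API_URL, data=data, headers=headers, method="POST")

# Actually sending the request (requires valid keys):
# with urllib.request.urlopen(build_request("What is quantum computing?", api_key="bai_xxx")) as resp:
#     print(json.load(resp))
```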

Self-Host

git clone https://github.com/BrowseAI-HQ/BrowserAI-Dev.git
cd BrowserAI-Dev
pnpm install
cp .env.example .env
# Fill in: SERP_API_KEY, OPENROUTER_API_KEY
pnpm dev

API Keys

Three ways to authenticate:

| Method | How | Limits |
| --- | --- | --- |
| BYOK (recommended) | Pass X-Tavily-Key and X-OpenRouter-Key headers | Unlimited, free |
| BrowseAI API Key | Pass Authorization: Bearer bai_xxx | Unlimited (uses your stored keys) |
| Demo | No auth needed | 5 queries/hour per IP |

Get a BrowseAI API key from the dashboard — it bundles your Tavily + OpenRouter keys into one key for CLI, MCP, and API use.

Project Structure

/apps/api              Fastify API server (port 3001)
/apps/mcp              MCP server (stdio transport, npm: browse-ai)
/packages/shared       Shared types, Zod schemas, constants
/packages/python-sdk   Python SDK (PyPI: browseai)
/src                   React frontend (Vite, port 8080)
/supabase              Database migrations

API Endpoints

| Endpoint | Description |
| --- | --- |
| POST /browse/search | Search the web |
| POST /browse/open | Fetch and parse a page |
| POST /browse/extract | Extract structured claims from a page |
| POST /browse/answer | Full pipeline: search + extract + cite |
| POST /browse/compare | Compare raw LLM vs evidence-backed answer |
| GET /browse/share/:id | Get a shared result |
| GET /browse/stats | Total queries answered |
| GET /browse/sources/top | Top cited source domains |
| GET /browse/analytics/summary | Usage analytics (authenticated) |

MCP Tools

| Tool | Description |
| --- | --- |
| browse_search | Search the web for information on any topic |
| browse_open | Fetch and parse a web page into clean text |
| browse_extract | Extract structured claims from a page |
| browse_answer | Full pipeline: search + extract + cite |
| browse_compare | Compare raw LLM vs evidence-backed answer |
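Over the stdio transport, a tools/call request to browse_answer might look like the fragment below. This is a sketch of a standard MCP JSON-RPC call; the argument name query is an assumption carried over from the REST API, not confirmed by this README:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "browse_answer",
    "arguments": { "query": "What is quantum computing?" }
  }
}
```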

Python SDK

| Method | Description |
| --- | --- |
| client.search(query) | Search the web |
| client.open(url) | Fetch and parse a page |
| client.extract(url, query=) | Extract claims from a page |
| client.ask(query) | Full pipeline with citations |
| client.compare(query) | Raw LLM vs evidence-backed |

Async support: AsyncBrowseAI with the same API.

Examples

See the examples/ directory for ready-to-run agent recipes:

| Example | Description |
| --- | --- |
| research-agent.py | Simple research agent with citations |
| code-research-agent.py | Research libraries/docs before writing code |
| hallucination-detector.py | Compare raw LLM vs evidence-backed answers |
| langchain-agent.py | BrowseAI as a LangChain tool |
| crewai-research-team.py | Multi-agent research team with CrewAI |

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| SERP_API_KEY | Yes | Web search API key (Tavily) |
| OPENROUTER_API_KEY | Yes | LLM API key (OpenRouter) |
| REDIS_URL | No | Redis URL (falls back to in-memory cache) |
| SUPABASE_URL | No | Supabase project URL |
| SUPABASE_SERVICE_ROLE_KEY | No | Supabase service role key |
| PORT | No | API server port (default: 3001) |
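A minimal .env for local development might look like this (all values are placeholders; only the two required keys are needed to start):

```shell
# Required
SERP_API_KEY=tvly-xxx
OPENROUTER_API_KEY=sk-or-xxx

# Optional
# REDIS_URL=redis://localhost:6379
# PORT=3001
```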

Tech Stack

  • API: Node.js, TypeScript, Fastify, Zod
  • Search: Tavily API
  • Parsing: @mozilla/readability + linkedom
  • AI: Gemini 2.5 Flash via OpenRouter
  • Caching: In-memory with intelligent TTL (time-sensitive queries get shorter TTL)
  • Frontend: React, Tailwind CSS, shadcn/ui, Framer Motion
  • MCP: @modelcontextprotocol/sdk
  • Python SDK: httpx, Pydantic
  • Database: Supabase (PostgreSQL)
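The intelligent-TTL caching mentioned above can be sketched as follows. This is a toy illustration only: the keyword list and TTL values are assumptions for the example, not the project's actual heuristics.

```python
import time

# Time-sensitive queries get a short TTL; evergreen queries a long one.
TIME_SENSITIVE = ("today", "latest", "news", "price", "current")

def pick_ttl(query, short=300, long=86400):
    """Return a TTL in seconds based on whether the query looks time-sensitive."""
    q = query.lower()
    return short if any(word in q for word in TIME_SENSITIVE) else long

class TTLCache:
    """Minimal in-memory cache whose entries expire after a per-query TTL."""

    def __init__(self):
        self._store = {}

    def set(self, query, value):
        self._store[query] = (value, time.monotonic() + pick_ttl(query))

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[query]  # lazily evict expired entries
            return None
        return value
```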

Community

Contributing

See CONTRIBUTING.md for setup instructions, coding conventions, and PR process.

License

MIT
