
🎬 mcptube-vision

YouTube video knowledge engine — transcripts, vision, and persistent wiki.


mcptube-vision transforms YouTube videos into a persistent, structured knowledge base using both transcripts and visual frame analysis. Built on the Karpathy LLM Wiki pattern: knowledge compounds with every video you add.

Evolved from mcptube v0.1 — mcptube-vision replaces semantic chunk search with a persistent wiki that gets smarter with every video ingested.


🧠 How It Works

Traditional video tools re-discover knowledge from scratch on every query. mcptube-vision is different:

           mcptube v0.1                       mcptube-vision
    ┌───────────────────────┐        ┌──────────────────────────┐
    │ Query → vector search │        │ Video ingested → LLM     │
    │ → raw chunks → LLM    │        │ extracts knowledge →     │
    │ → answer (from        │        │ wiki pages created →     │
    │   scratch every time) │        │ cross-references built   │
    └───────────────────────┘        │                          │
                                     │ Query → FTS5 + agent     │
                                     │ → reasons over compiled  │
                                     │   knowledge → answer     │
                                     └──────────────────────────┘

|             | v0.1 (Video Search Engine)           | vision (Video Knowledge Engine)        |
|-------------|--------------------------------------|----------------------------------------|
| On ingest   | Chunk transcript, embed in vector DB | LLM watches + reads, writes wiki pages |
| On query    | Find similar chunks                  | Agent reasons over compiled knowledge  |
| Frames      | Timestamp or keyword extraction      | Scene-change detection + vision model  |
| Cross-video | Re-search all chunks each time       | Connections already in the wiki        |
| Over time   | Library of isolated videos           | Compounding knowledge base             |

🏗️ Technical Architecture

mcptube-vision is built around a core insight: video knowledge should compound, not be re-discovered. Every architectural decision flows from this principle.

System Overview

flowchart TD
    YT[YouTube URL] --> EXT[YouTubeExtractor\ntranscript + metadata]
    EXT --> FRAMES[SceneFrameExtractor\nffmpeg scene-change detection]
    FRAMES --> VISION[VisionDescriber\nLLM vision model]
    VISION --> WIKI_EXT[WikiExtractor\nLLM knowledge extraction]
    EXT --> WIKI_EXT
    WIKI_EXT --> WIKI_ENG[WikiEngine\nmerge + update]
    WIKI_ENG --> FILE[FileWikiRepository\nJSON pages on disk]
    WIKI_ENG --> FTS[SQLite FTS5\nsearch index]
    FILE --> AGENT[Ask Agent\nFTS5 → LLM reasoning]
    FTS --> AGENT
    FILE --> CLI[CLI / MCP Server]
    FTS --> CLI

    subgraph Ingestion Pipeline
        EXT
        FRAMES
        VISION
        WIKI_EXT
    end

    subgraph Knowledge Store
        WIKI_ENG
        FILE
        FTS
    end

    subgraph Retrieval
        AGENT
    end

The system overview shows three distinct subsystems connected by a unidirectional data flow. The Ingestion Pipeline (left) transforms a raw YouTube URL into structured knowledge through four stages: transcript extraction, scene-change frame detection, vision-model description, and LLM-powered knowledge extraction. Each stage enriches the signal — raw video becomes text, text becomes typed knowledge objects.

The Knowledge Store (center) is the persistent layer. The WikiEngine applies merge semantics — deciding whether to create new pages or append to existing ones — then writes JSON files to disk and updates the FTS5 search index in parallel. These two stores serve different access patterns: files for full-page reads and exports, FTS5 for sub-millisecond keyword retrieval.

The Retrieval layer (right) combines both stores. The Ask Agent first narrows via FTS5, then loads full pages from disk, and finally reasons over candidates with structural awareness from the wiki TOC. The CLI and MCP Server sit alongside as thin presentation layers — they never contain business logic.


Ingestion Flow

sequenceDiagram
    participant User
    participant CLI
    participant YouTubeExtractor
    participant SceneFrameExtractor
    participant VisionDescriber
    participant WikiExtractor
    participant WikiEngine
    participant FileRepo
    participant FTS5

    User->>CLI: mcptube add <url>
    CLI->>YouTubeExtractor: fetch transcript + metadata
    YouTubeExtractor-->>CLI: segments, duration, channel

    CLI->>SceneFrameExtractor: extract scene frames (ffmpeg)
    SceneFrameExtractor-->>CLI: frame images (scene_000x.jpg)

    CLI->>VisionDescriber: describe frames (LLM vision)
    VisionDescriber-->>CLI: frame descriptions (prose)

    CLI->>WikiExtractor: extract knowledge\n(transcript + frame descriptions)
    WikiExtractor-->>CLI: entities, topics, concepts, video page

    CLI->>WikiEngine: merge into wiki
    WikiEngine->>FileRepo: write/update JSON pages\n(append entities, rewrite synthesis)
    WikiEngine->>FTS5: update search index
    FileRepo-->>WikiEngine: ✅
    FTS5-->>WikiEngine: ✅
    WikiEngine-->>CLI: wiki processed
    CLI-->>User: ✅ Added + Wiki: full_analysis

The ingestion flow is a write-once pipeline — LLM-heavy at ingest time, but never repeated for the same video. This is the key cost tradeoff: invest tokens upfront to build compiled knowledge, so retrieval is cheap.

The sequence shows two critical branching points. First, after transcript extraction, the pipeline forks into vision processing (scene frames → LLM vision descriptions) and feeds both streams into the WikiExtractor. This dual-signal approach means the LLM sees both what was said and what was shown — critical for content like coding tutorials or slide-based lectures where the transcript alone misses visual information.

Second, the WikiEngine merge step is where knowledge compounding happens. Rather than blindly writing new pages, it checks for existing entities, topics, and concepts — appending new video contributions to existing pages and rewriting synthesis summaries. This is why ingesting video #10 makes the wiki smarter about videos #1–9 too: shared concepts get richer synthesis with each new source.

The final FTS5 index update runs synchronously after the file write, ensuring search consistency. There is no eventual-consistency window — once add_video returns, all new knowledge is immediately searchable.
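
To make that consistency guarantee concrete, here is a minimal sketch of the write path with the file write and the index update in one synchronous call. The page fields and the `wiki_fts` schema are assumptions for illustration, not the project's actual code:

```python
import json
import sqlite3
from pathlib import Path

def persist_page(page: dict, wiki_dir: Path, index_db: sqlite3.Connection) -> None:
    """Write a wiki page to disk, then update the FTS5 index before returning.

    The ordering is the point: the index refresh is synchronous, so there is
    no window in which a page exists on disk but is not yet searchable.
    """
    # 1. One human-readable JSON file per page, grouped by page type.
    path = wiki_dir / page["type"] / f"{page['slug']}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(page, indent=2))

    # 2. Refresh the search index in the same call (delete + re-insert).
    index_db.execute("DELETE FROM wiki_fts WHERE slug = ?", (page["slug"],))
    index_db.execute(
        "INSERT INTO wiki_fts (slug, title, tags, content) VALUES (?, ?, ?, ?)",
        (page["slug"], page["title"], " ".join(page.get("tags", [])), page["content"]),
    )
    index_db.commit()
```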


Retrieval Flow

sequenceDiagram
    participant User
    participant CLI
    participant FTS5
    participant FileRepo
    participant Agent

    User->>CLI: mcptube ask "What is RLHF?"

    CLI->>FTS5: keyword search (sanitized query)
    FTS5-->>CLI: candidate page slugs (ranked)

    CLI->>FileRepo: load candidate pages (JSON)
    FileRepo-->>CLI: wiki pages (entities, topics, concepts)

    CLI->>FileRepo: load wiki TOC
    FileRepo-->>CLI: table of contents (all page titles + types)

    CLI->>Agent: candidates + TOC + question
    Agent-->>CLI: reasoned answer with source citations

    CLI-->>User: answer + (source-slug) citations

The retrieval flow is deliberately two-stage to balance cost and intelligence. The first stage — FTS5 keyword search — runs entirely locally with zero LLM tokens, narrowing thousands of wiki pages to a ranked handful in milliseconds. Query sanitization strips special characters (e.g. ?, !) that would break FTS5 syntax, ensuring robustness for natural-language questions.

The second stage loads two types of context for the agent: the candidate pages (full detail — summaries, contributions, entity references) and the wiki TOC (a compact structural map of all knowledge). The TOC is critical — it gives the agent awareness of what it doesn't know. Without it, the agent would hallucinate answers from weak matches. With it, the agent can reason: "The wiki has pages on RLHF and scaling laws, but nothing on quantum computing — so I should say I don't have that information."

In CLI mode (BYOK), the agent is an LLM call that synthesizes the final answer with source citations. In MCP server mode (passthrough), this stage returns the raw candidates and TOC to the client — letting the client's own model (Copilot, Claude, Gemini) do the reasoning. This dual-mode design means the server never requires an API key when used via MCP.
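
A minimal sketch of the first stage, assuming an FTS5 table named `wiki_fts`: sanitize the question down to bare keywords, then let SQLite's BM25 ranking pick the candidates. The exact sanitizer mcptube-vision uses may differ:

```python
import re
import sqlite3

def sanitize_fts_query(question: str) -> str:
    """Reduce a natural-language question to bare keywords, dropping
    characters like '?', '!' and '"' that are FTS5 query syntax."""
    return " ".join(re.findall(r"[A-Za-z0-9]+", question))

def fts_candidates(db: sqlite3.Connection, question: str, k: int = 5) -> list[str]:
    """Stage 1: zero-token narrowing via BM25-ranked keyword search."""
    query = sanitize_fts_query(question)
    if not query:
        return []
    rows = db.execute(
        "SELECT slug FROM wiki_fts WHERE wiki_fts MATCH ? ORDER BY rank LIMIT ?",
        (query, k),
    ).fetchall()
    return [slug for (slug,) in rows]
```

Stage 2 then loads those slugs from disk and hands the full pages, plus the TOC, to whichever model does the reasoning: the BYOK agent in CLI mode, the client's own model in MCP mode.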


Subsystem Breakdown

1. Ingestion Pipeline

YouTubeExtractor pulls transcript segments via youtube-transcript-api and video metadata via yt-dlp. Transcripts are chunked by natural segment boundaries, not fixed token windows — preserving semantic coherence.

SceneFrameExtractor uses ffmpeg's perceptual scene-change filter (select='gt(scene,{threshold})') rather than fixed-interval sampling. This is deliberate: fixed intervals waste tokens on static frames (slides held for 30s), while scene-change detection captures transitions — the moments of highest information density. The threshold (default 0.4) is configurable.

VisionDescriber sends detected frames to a vision-capable LLM (GPT-4o, Claude, Gemini — auto-detected via API key priority). Frame descriptions are plain prose, not structured JSON, to maximise the LLM's descriptive latitude.

Why this matters: A transcript of a coding tutorial misses the code on screen. Scene-change vision capture recovers that signal without the token cost of dense fixed-interval sampling.
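
A sketch of the ffmpeg invocation this implies; the filter expression is the one quoted above, while the output naming and remaining flags are assumptions:

```python
import subprocess
from pathlib import Path

def extract_scene_frames(video: Path, out_dir: Path, threshold: float = 0.4) -> list[Path]:
    """Emit one JPEG per detected scene change.

    A 30-second static slide yields no frames; a hard cut yields one.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", str(video),
            "-vf", f"select='gt(scene,{threshold})'",  # perceptual scene-change filter
            "-vsync", "vfr",                           # keep only the selected frames
            str(out_dir / "scene_%04d.jpg"),
        ],
        check=True,
    )
    return sorted(out_dir.glob("scene_*.jpg"))
```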


2. WikiEngine — The Novel Core ⭐

Inspired by the Karpathy LLM Wiki pattern, this is the most architecturally distinctive component.

WikiExtractor takes the combined transcript + frame descriptions and prompts an LLM to extract four typed knowledge objects:

| Type    | Semantics                                | Update Policy                                          |
|---------|------------------------------------------|--------------------------------------------------------|
| video   | Immutable per-video summary + timestamps | Write-once                                             |
| entity  | People, tools, companies                 | Append-only — new references added, never overwritten  |
| topic   | Broad themes (e.g. "Scaling Laws")       | Synthesis rewritten; per-video contributions immutable |
| concept | Specific ideas (e.g. "RLHF")             | Synthesis rewritten; per-video contributions immutable |

WikiEngine handles merge semantics — when a new video references an existing entity or concept, it integrates the new evidence without destroying prior contributions. This is a CRDT-like append model for knowledge, not a replace-on-write index as in a vector store.

Why this matters: Vector stores are retrieval indexes — they don't synthesize. Two videos about "attention mechanisms" produce two isolated chunks. The WikiEngine merges them into a single concept-attention-mechanisms page with a synthesis that evolves as evidence accumulates. Knowledge compounds.

Version history is maintained for all non-immutable pages — every synthesis rewrite is snapshotted, enabling full auditability.
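
A compact sketch of those update policies, using hypothetical field names (the actual data model may differ):

```python
from dataclasses import dataclass, field

@dataclass
class WikiPage:
    slug: str
    type: str                            # "video" | "entity" | "topic" | "concept"
    synthesis: str = ""                  # evolves as evidence accumulates
    contributions: dict[str, str] = field(default_factory=dict)  # video_id -> immutable text

def merge(page: WikiPage, video_id: str, contribution: str, new_synthesis: str) -> None:
    """Integrate one video's evidence without destroying prior contributions."""
    if page.type == "video":
        return                                           # write-once, never merged
    page.contributions.setdefault(video_id, contribution)  # append-only, never overwrite
    if page.type in ("topic", "concept"):
        # A real implementation snapshots the old synthesis to version
        # history before this rewrite (see Wiki Page Types below).
        page.synthesis = new_synthesis
```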


3. Storage Layer

FileWikiRepository stores wiki pages as JSON on disk, one file per page. Chosen over a document DB deliberately:

  • Human-readable and git-diffable
  • Trivially exportable to markdown/HTML
  • Schema evolution without migrations

SQLite FTS5 maintains a parallel search index over page titles, tags, and content. Chosen over a vector store because:

  • Zero embedding cost at query time
  • Deterministic, auditable results
  • Sub-millisecond latency at thousands of pages

Why not ChromaDB/Pinecone? At wiki scale, BM25-style keyword search over compiled knowledge pages outperforms semantic similarity over raw chunks — the wiki pages are already semantically rich by construction.
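
Much of the argument is that the whole search layer fits in a few lines of the Python standard library. A sketch, assuming the same hypothetical `wiki_fts` schema as above:

```python
import sqlite3

db = sqlite3.connect("wiki.db")
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS wiki_fts USING fts5(slug, title, tags, content)"
)

# Deterministic, auditable, BM25-ranked -- no embedding call, no network.
hits = db.execute(
    "SELECT slug FROM wiki_fts WHERE wiki_fts MATCH ? ORDER BY rank LIMIT 5",
    ("attention mechanisms",),
).fetchall()
```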


4. Hybrid Retrieval Agent ⭐

The ask command uses a deliberate two-stage pattern:

  1. FTS5 keyword search — narrows the full wiki to a small candidate set (milliseconds, zero LLM cost)
  2. LLM agent — receives candidates + the wiki table of contents, reasons about relevance, synthesizes a grounded answer with source citations

Why this matters over RAG: Standard RAG retrieves chunks and generates. The agent here retrieves compiled knowledge pages and reasons. The wiki TOC gives the agent structural awareness of what knowledge exists — enabling it to correctly say "I don't have information about X" rather than hallucinating from weak chunk matches.


5. MCP Server

Exposes all subsystems as tools consumable by any MCP-compatible client. Report and synthesis tools use a passthrough pattern — returning structured data for the client's own LLM to analyse, rather than making a second LLM call server-side. This avoids double-billing and lets the client model apply its own reasoning style.
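
A sketch of the passthrough shape using the MCP Python SDK's FastMCP interface; the tool name matches the table below, but the body and helper are hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcptube")

def load_pages_for(video_ids: list[str]) -> list[dict]:
    # Hypothetical helper: read the relevant wiki JSON pages from disk.
    return []

@mcp.tool()
def synthesize(topic: str, video_ids: list[str]) -> dict:
    """Return the raw material for a cross-video synthesis.

    Passthrough: no server-side LLM call is made. The connected client's
    own model reasons over the returned pages, so the server needs no API
    key and the user is not billed twice.
    """
    return {"topic": topic, "pages": load_pages_for(video_ids)}
```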


Key Design Decisions

| Decision                      | Alternative Considered | Reason                                          |
|-------------------------------|------------------------|-------------------------------------------------|
| Scene-change frame extraction | Fixed-interval sampling| Higher signal/token ratio                       |
| Wiki knowledge model          | Vector store chunks    | Knowledge compounds; no re-discovery per query  |
| FTS5 retrieval                | Embedding similarity   | Compiled wiki pages are already semantic        |
| File-based wiki storage       | SQLite/document DB     | Human-readable, git-diffable, zero migrations   |
| Append-only entity updates    | Full rewrite           | Source attribution preserved; full auditability |
| Passthrough MCP reports       | Server-side LLM        | Avoids double-billing; client model reasons     |

✨ Features

| Feature                                         | CLI       | MCP Server       |
|-------------------------------------------------|-----------|------------------|
| Add/remove YouTube videos                       | ✅        | ✅               |
| Wiki knowledge base (auto-built)                | ✅        | ✅               |
| Scene-change frame extraction + vision analysis | ✅        | ✅               |
| Full-text wiki search (FTS5)                    | ✅        | ✅               |
| Agentic Q&A over wiki                           | ✅        | ✅               |
| Browse wiki pages (entities, topics, concepts)  | ✅        | ✅               |
| Wiki version history                            | ✅        | ✅               |
| Wiki export (markdown, HTML)                    | ✅        | ✅               |
| Illustrated reports (single & cross-video)      | ✅ (BYOK) | ✅ (passthrough) |
| YouTube discovery + clustering                  | ✅ (BYOK) |                  |
| Cross-video synthesis                           | ✅ (BYOK) | ✅ (passthrough) |
| Text-only processing mode                       | ✅        | ✅               |

BYOK = Bring Your Own Key (Anthropic, OpenAI, or Google).
Passthrough = the MCP client's own LLM does the analysis.


📦 Installation

Prerequisites

  • Python 3.12 or 3.13
  • ffmpeg — required for frame extraction (install guide)

Recommended: pipx

pipx install mcptube --python python3.12

Alternative: pip

python3.12 -m venv venv
source venv/bin/activate
pip install mcptube

Verify installation

mcptube --help

🚀 Quick Start

# 1. Add a video (builds wiki automatically)
mcptube add "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

# 2. Add with text-only processing (cheaper, faster)
mcptube add "https://www.youtube.com/watch?v=abc123" --text-only

# 3. Browse the wiki
mcptube wiki list
mcptube wiki show "video-dQw4w9WgXcQ"

# 4. Search the knowledge base
mcptube search "main topic"

# 5. Ask a question (agentic retrieval over wiki)
mcptube ask "What are the key ideas discussed?"

# 6. View the table of contents
mcptube wiki toc

💡 Always wrap multi-word arguments in double quotes.


📖 CLI Reference

Library Management

| Command | Description | Example |
|---|---|---|
| `mcptube add "<url>"` | Ingest video + build wiki (full analysis) | `mcptube add "https://youtu.be/dQw4w9WgXcQ"` |
| `mcptube add "<url>" --text-only` | Ingest without vision processing | `mcptube add "https://youtu.be/abc" --text-only` |
| `mcptube list` | List all videos with tags | `mcptube list` |
| `mcptube info <query>` | Show full video details (transcript, chapters) | `mcptube info 1` |
| `mcptube remove <query>` | Remove video + clean wiki references | `mcptube remove 1` |

<query> can be a video index number, video ID, or partial title.
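One plausible resolution order, purely illustrative (the actual matching rules are not documented here):

```python
def resolve_video(query: str, videos: list[dict]) -> dict | None:
    """Resolve a <query> argument: index number, exact video ID, then
    partial title. Field names ('id', 'title') are assumptions."""
    if query.isdigit():                               # 1-based index from `mcptube list`
        i = int(query) - 1
        return videos[i] if 0 <= i < len(videos) else None
    for v in videos:
        if v["id"] == query:                          # exact YouTube video ID
            return v
    matches = [v for v in videos if query.lower() in v["title"].lower()]
    return matches[0] if len(matches) == 1 else None  # partial title, only if unambiguous
```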


Wiki Knowledge Base

| Command | Description | Example |
|---|---|---|
| `mcptube wiki list` | Browse all wiki pages | `mcptube wiki list` |
| `mcptube wiki list --type <type>` | Filter by type: video, entity, topic, concept | `mcptube wiki list --type concept` |
| `mcptube wiki list --tag <tag>` | Filter by tag | `mcptube wiki list --tag AI` |
| `mcptube wiki show <slug>` | Read a specific wiki page in full | `mcptube wiki show "entity-openai"` |
| `mcptube wiki search "<query>"` | Full-text search across all wiki pages | `mcptube wiki search "attention"` |
| `mcptube wiki toc` | Table of contents (all pages, compact) | `mcptube wiki toc` |
| `mcptube wiki history <slug>` | Version history for a wiki page | `mcptube wiki history "topic-ml"` |
| `mcptube wiki export` | Export all pages as markdown (default) | `mcptube wiki export -o wiki_export/` |
| `mcptube wiki export --format html` | Export all pages as single HTML file | `mcptube wiki export --format html -o wiki.html` |
| `mcptube wiki export --page <slug>` | Export a single page | `mcptube wiki export --page "entity-openai" -o openai.md` |

Search & Ask

| Command | Description | Example |
|---|---|---|
| `mcptube search "<query>"` | Full-text search, returns page list | `mcptube search "transformers"` |
| `mcptube ask "<question>"` | Agentic Q&A over wiki (BYOK) | `mcptube ask "What is self-attention?"` |

Frames

| Command | Description | Example |
|---|---|---|
| `mcptube frame <query> <timestamp>` | Extract frame at exact timestamp (seconds) | `mcptube frame 1 30.5` |
| `mcptube frame-query <query> "<description>"` | Extract frame by transcript match | `mcptube frame-query 1 "when they show the diagram"` |

Analysis & Reports (BYOK)

| Command | Description | Example |
|---|---|---|
| `mcptube classify <query>` | LLM classify + tag a video | `mcptube classify 1` |
| `mcptube report <query>` | Generate illustrated report for one video | `mcptube report 1` |
| `mcptube report <query> --focus "<topic>"` | Guide report with a focus query | `mcptube report 1 --focus "RLHF"` |
| `mcptube report <query> --format html -o <file>` | Save report as HTML | `mcptube report 1 --format html -o report.html` |
| `mcptube report-query "<topic>"` | Cross-video report on a topic | `mcptube report-query "scaling laws"` |
| `mcptube report-query "<topic>" --tag <tag>` | Cross-video report filtered by tag | `mcptube report-query "AI" --tag research` |
| `mcptube report-query "<topic>" -o <file>` | Save cross-video report | `mcptube report-query "AI" --format html -o report.html` |
| `mcptube synthesize-cmd "<topic>" -v <id> -v <id>` | Cross-video theme synthesis | `mcptube synthesize-cmd "RLHF" -v id1 -v id2` |
| `mcptube synthesize-cmd "<topic>" -v <id> --format html -o <file>` | Save synthesis as HTML | `mcptube synthesize-cmd "AI" -v id1 --format html -o out.html` |
| `mcptube discover "<topic>"` | Search YouTube, cluster results (no ingest) | `mcptube discover "prompt engineering"` |

Server

| Command | Description |
|---|---|
| `mcptube serve` | Start MCP server over HTTP (default 127.0.0.1:9093) |
| `mcptube serve --stdio` | Start MCP server over stdio (for Claude Desktop) |
| `mcptube serve --host <host> --port <port>` | Custom host/port |
| `mcptube serve --reload` | Hot-reload mode for development |

🧩 Wiki Page Types

When you ingest a video, mcptube-vision builds four types of wiki pages:

| Page Type | Created From                        | Update Policy                                           |
|-----------|-------------------------------------|---------------------------------------------------------|
| Video     | Each ingested video                 | Write-once (immutable)                                  |
| Entity    | People, companies, tools mentioned  | Append-only (new references added)                      |
| Topic     | Broad themes (e.g., "Machine Learning") | Synthesis rewritten, per-video contributions immutable |
| Concept   | Specific ideas (e.g., "Scaling Laws")   | Synthesis rewritten, per-video contributions immutable |

Principle: Raw source content (what was said/shown in each video) is never modified. Only synthesis summaries evolve as new videos are added. Version history is maintained for all changes.
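
A sketch of what the snapshot-before-rewrite rule might look like on disk, using the `wiki/_history/` directory from the data layout below; filenames are assumptions:

```python
import json
import time
from pathlib import Path

def rewrite_synthesis(page_path: Path, new_synthesis: str, history_dir: Path) -> None:
    """Archive the current page, then rewrite only its synthesis field."""
    page = json.loads(page_path.read_text())

    # Snapshot the full page before any change -- full auditability.
    history_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    (history_dir / f"{page['slug']}.{stamp}.json").write_text(json.dumps(page, indent=2))

    # Per-video contributions are never touched; only the synthesis evolves.
    page["synthesis"] = new_synthesis
    page_path.write_text(json.dumps(page, indent=2))
```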


🔍 How Search Works (Hybrid Retrieval)

mcptube-vision uses a two-step hybrid approach:

  1. SQLite FTS5 — keyword search narrows thousands of wiki pages to a handful of candidates (milliseconds, zero LLM cost)
  2. LLM Agent — reads candidates + wiki table of contents, reasons about relevance, synthesizes an answer

This gives you the speed of keyword search with the intelligence of an LLM agent.


👁️ Vision Pipeline

When you ingest a video without --text-only, mcptube-vision:

  1. Extracts key frames using ffmpeg scene-change detection (select='gt(scene,0.4)')
  2. Sends frames to a vision-capable LLM (GPT-4o, Claude, Gemini) for description
  3. Combines frame descriptions with transcript in the knowledge extraction pass

This captures visual content (slides, code, diagrams, demos) that transcripts alone miss.


🔌 MCP Client Setup

mcptube exposes 25+ MCP tools via two transports:

| Transport | How it works | Used by |
|---|---|---|
| Streamable HTTP (`/mcp`) | Client connects to a running mcptube server | VS Code, Claude Code, Cursor, Windsurf, Codex, Gemini CLI |
| stdio | MCP client spawns mcptube as a child process | Claude Desktop |

ℹ️ The MCP server is currently available for local use only. You must run mcptube serve locally or let the client spawn it.


VS Code + GitHub Copilot ✅ Tested

Open the Command Palette (Cmd+Shift+P), run MCP: Open User Configuration, and add:

{
  "servers": {
    "mcptube": {
      "url": "http://127.0.0.1:9093/mcp"
    }
  }
}

Then start the server in a terminal:

mcptube serve

Claude Code ✅ Tested

claude mcp add mcptube --transport http http://127.0.0.1:9093/mcp

Then start the server in a separate terminal:

mcptube serve

Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

If installed via pipx (recommended):

{
  "mcpServers": {
    "mcptube": {
      "command": "mcptube",
      "args": ["serve", "--stdio"]
    }
  }
}

If installed in a virtual environment:

{
  "mcpServers": {
    "mcptube": {
      "command": "/full/path/to/.venv/bin/mcptube",
      "args": ["serve", "--stdio"]
    }
  }
}

No separate server needed — Claude Desktop spawns the process automatically.


Cursor

Create or edit ~/.cursor/mcp.json (global) or .cursor/mcp.json (project-scoped):

{
  "mcpServers": {
    "mcptube": {
      "url": "http://127.0.0.1:9093/mcp"
    }
  }
}

Then start the server:

mcptube serve

Windsurf

Edit ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "mcptube": {
      "serverUrl": "http://127.0.0.1:9093/mcp"
    }
  }
}

Then start the server:

mcptube serve

OpenAI Codex

Edit ~/.codex/config.toml:

[mcp_servers.mcptube]
url = "http://127.0.0.1:9093/mcp"

Then start the server:

mcptube serve

Gemini CLI

Edit ~/.gemini/settings.json:

{
  "mcpServers": {
    "mcptube": {
      "httpUrl": "http://127.0.0.1:9093/mcp"
    }
  }
}

Then start the server:

mcptube serve

Verify Connection

Once connected, ask your MCP client:

use mcptube. list all videos in my library

It should call the list_videos tool and return results.

MCP Tools

| Tool | Description |
|---|---|
| `add_video` | Ingest video + build wiki |
| `list_videos` | List library |
| `remove_video` | Remove video + clean wiki |
| `wiki_list` | Browse wiki pages |
| `wiki_show` | Read a wiki page |
| `wiki_search` | Full-text search |
| `wiki_toc` | Table of contents |
| `wiki_ask` | Agentic Q&A |
| `wiki_history` | Version history |
| `get_frame` | Extract frame (inline image) |
| `get_frame_by_query` | Frame by transcript match |
| `classify_video` | Get metadata for classification |
| `generate_report` | Get data for single-video report |
| `generate_report_from_query` | Get data for cross-video report |
| `synthesize` | Get data for theme synthesis |
| `discover_videos` | Search YouTube |
| `ask_video` | Single-video Q&A data |
| `ask_videos` | Multi-video Q&A data |

⚙️ Configuration

All settings can be overridden via environment variables prefixed with MCPTUBE_:

| Setting | Default | Env Var |
|---|---|---|
| Data directory | `~/.mcptube` | `MCPTUBE_DATA_DIR` |
| Server host | `127.0.0.1` | `MCPTUBE_HOST` |
| Server port | `9093` | `MCPTUBE_PORT` |
| Default LLM model | `gpt-4o` | `MCPTUBE_DEFAULT_MODEL` |
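
A minimal sketch of how such prefixed overrides are typically read; not the project's actual settings loader:

```python
import os
from pathlib import Path

DATA_DIR = Path(os.environ.get("MCPTUBE_DATA_DIR", "~/.mcptube")).expanduser()
HOST = os.environ.get("MCPTUBE_HOST", "127.0.0.1")
PORT = int(os.environ.get("MCPTUBE_PORT", "9093"))
DEFAULT_MODEL = os.environ.get("MCPTUBE_DEFAULT_MODEL", "gpt-4o")
```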

BYOK API Keys

Set one or more to enable LLM features:

export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AI..."

Auto-detection priority: Anthropic → OpenAI → Google.
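
The priority order implies a check like this sketch (the function itself is illustrative):

```python
import os

def detect_provider() -> str | None:
    """Return the first provider whose API key is set, in priority order."""
    for env_var, provider in [
        ("ANTHROPIC_API_KEY", "anthropic"),
        ("OPENAI_API_KEY", "openai"),
        ("GOOGLE_API_KEY", "google"),
    ]:
        if os.environ.get(env_var):
            return provider
    return None  # no key: BYOK features off, MCP passthrough still works
```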


📁 Data Layout

~/.mcptube/
├── mcptube.db          # Video metadata (SQLite)
├── wiki.db             # FTS5 search index (SQLite)
├── wiki/
│   ├── video/          # Video pages (JSON)
│   ├── entity/         # Entity pages (JSON)
│   ├── topic/          # Topic pages (JSON)
│   ├── concept/        # Concept pages (JSON)
│   └── _history/       # Version history
└── frames/
    ├── <id>_<ts>.jpg   # Single extracted frames
    └── <id>_scenes/    # Scene-change frames + metadata

🧪 Development

git clone https://github.com/0xchamin/mcptube.git
cd mcptube
git checkout vision
python3.12 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pytest

🗺️ Roadmap

  • Wiki knowledge engine (entities, topics, concepts)
  • Scene-change frame extraction + vision analysis
  • Hybrid retrieval (FTS5 + agentic)
  • CLI + MCP server
  • Playlist/series support
  • Web app with early access sign-up
  • Token-based payment integration

📄 License

MIT — see LICENSE for details.
