
booklib

Detects what your AI doesn't know about your project and fixes it. Post-training gap detection, runtime context injection via MCP.

GitHub · 20 stars · 4 forks · Updated Apr 10, 2026 · Validated Apr 12, 2026

A context engineering tool for AI coding assistants.
Detects post-training knowledge gaps, resolves them automatically,
and delivers your team's decisions via MCP to Claude, Cursor, Copilot, and 10+ tools.


767 tests  ·  23 expert skills  ·  10 ecosystems  ·  11 languages  ·  14 AI tools


The Problem

Your AI writes code using knowledge from its training data. Your project uses libraries released after that cutoff. The result: hallucinated APIs, deprecated patterns, and code that doesn't compile.

Your team's decisions — use PaymentIntents, not Charges; always wrap API responses; never useEffect for data fetching — exist nowhere in the AI's training data. The code it generates is idiomatic React or idiomatic Node — just not idiomatic yours.

When we asked Claude to write code for botid (published 2026-03-03, post-training):

"I can't verify the botid package's actual API. I won't output code for a package whose API I can't verify. Guessing function names and signatures would likely give you broken code."

The Fix

BookLib detects every post-training API in your codebase and resolves the gaps automatically.

Real output: booklib analyze on vercel/ai-chatbot — 82 dependencies, 274 post-training APIs across 158 files.

Without BookLib — AI uses AI SDK v5 patterns from training data:

import { OpenAIStream, StreamingTextResponse } from 'ai';  // removed in v6
import OpenAI from 'openai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await new OpenAI().chat.completions.create({
    model: 'gpt-4', stream: true, messages,
  });
  return new StreamingTextResponse(OpenAIStream(response));
}

With BookLib — AI gets v6 docs injected at runtime:

import { streamText, convertToModelMessages,
  createUIMessageStreamResponse } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });
  return createUIMessageStreamResponse({
    stream: result.toUIMessageStream(),
  });
}

Getting Started

Requires Node.js >= 18.

npm install -g @booklib/core
booklib init

The wizard detects your stack, configures MCP for your AI tools, and builds the knowledge index. Then see what your AI doesn't know:

booklib analyze

Website and skill browser at booklib-ai.github.io/booklib.


How It Works

Most context tools wait for your AI to ask. BookLib detects gaps before coding starts and injects corrections as code is written.

1. Detect Knowledge Gaps

Scans your dependencies across npm, PyPI, Maven, Crates.io, RubyGems, Go modules, Packagist, Pub, Swift, and NuGet. Checks publish dates against the model's training cutoff, then cross-references with your source code to find the exact files and APIs affected.
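The core of the date check is simple. The sketch below is illustrative, not BookLib's actual internals: a dependency counts as a gap when the installed version was published after the model's training cutoff.

```typescript
// Hypothetical gap check — names and shapes are assumptions for illustration.

interface PackageRelease {
  name: string;
  version: string;
  publishedAt: Date; // e.g. taken from the registry's publish metadata
}

// A release is "post-training" when it shipped after the model's cutoff.
function isPostTraining(release: PackageRelease, cutoff: Date): boolean {
  return release.publishedAt.getTime() > cutoff.getTime();
}

const cutoff = new Date('2025-03-01'); // assumed cutoff, for illustration only
const botid: PackageRelease = {
  name: 'botid',
  version: '1.0.0',
  publishedAt: new Date('2026-03-03'),
};

console.log(isPostTraining(botid, cutoff)); // true — flag it for resolution
```

From here, cross-referencing with source code is a matter of matching flagged package names against each file's imports.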

2. Resolve Automatically

For each gap, BookLib fetches current documentation:

  1. Context7 — instant, version-specific library docs
  2. GitHub — releases, wiki, and discussions
  3. Manual — suggests the right booklib connect command
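The fallback order above can be sketched as a list of resolvers tried in sequence. The resolver names here are stand-ins, not BookLib's real API:

```typescript
// A resolver returns docs for a package, or null if it has none.
type Resolver = (pkg: string) => Promise<string | null>;

// Try each source in order; fall back to a manual suggestion.
async function resolveGap(pkg: string, resolvers: Resolver[]): Promise<string> {
  for (const resolve of resolvers) {
    const docs = await resolve(pkg);
    if (docs !== null) return docs;
  }
  return `No docs found for ${pkg} — connect a source manually`;
}

// Stub resolvers standing in for the Context7 and GitHub lookups.
const fromContext7: Resolver = async (pkg) =>
  pkg === 'ai' ? 'ai v6: use streamText / convertToModelMessages' : null;
const fromGitHub: Resolver = async () => null;

resolveGap('ai', [fromContext7, fromGitHub]).then(console.log);
```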

3. Protect at Runtime

PreToolUse and PostToolUse hooks inject context as your AI writes code:

  • Runtime injection — 3-10 lines of relevant knowledge inserted before each edit, powered by a pre-computed context map
  • Import checking — flags unknown APIs not in the index (11 languages)
  • Contradiction detection — warns in real time when code violates team decisions
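Conceptually, the injection step is a lookup in the pre-computed context map keyed by file path. A minimal sketch, with hypothetical shapes rather than BookLib's real data model:

```typescript
// Hypothetical context map: path fragment -> short knowledge snippets.
interface ContextMap {
  [pattern: string]: string[];
}

// Before an edit, collect the snippets whose pattern matches the file,
// capped so the injection stays small (the 3-10 line budget).
function injectContext(filePath: string, map: ContextMap): string[] {
  const snippets: string[] = [];
  for (const [pattern, notes] of Object.entries(map)) {
    if (filePath.includes(pattern)) snippets.push(...notes);
  }
  return snippets.slice(0, 10);
}

const map: ContextMap = {
  'app/api/chat': ['ai v6: OpenAIStream was removed; use streamText'],
  'payments/': ['Team decision: use PaymentIntents, not Charges'],
};

injectContext('app/api/chat/route.ts', map);
// -> ['ai v6: OpenAIStream was removed; use streamText']
```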

4. Capture Team Knowledge

Your team's decisions live nowhere in public docs. BookLib auto-detects project documentation — specs, ADRs, architecture docs — and indexes them alongside your team decisions.

booklib capture --title "use PaymentIntents not Charges" --type decision
booklib connect notion database <db-id>
booklib connect github discussions org/repo
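One way to picture a captured decision is as a forbidden pattern plus a suggested replacement. This is a naive illustration, not BookLib's storage format:

```typescript
// Hypothetical decision record — the shape is an assumption.
interface Decision {
  title: string;
  forbids: RegExp; // pattern the team has ruled out
  instead: string; // what to use instead
}

const decision: Decision = {
  title: 'use PaymentIntents not Charges',
  forbids: /stripe\.charges\.create/,
  instead: 'stripe.paymentIntents.create',
};

// Return a warning when generated code matches a forbidden pattern.
function violates(code: string, d: Decision): string | null {
  return d.forbids.test(code)
    ? `Contradicts "${d.title}": use ${d.instead}`
    : null;
}

violates('await stripe.charges.create({ amount })', decision);
// -> 'Contradicts "use PaymentIntents not Charges": use stripe.paymentIntents.create'
```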

Features

| Feature | Details |
| --- | --- |
| Gap Detection | 10 package ecosystems, cross-referenced with source code |
| Runtime Injection | Pre/PostToolUse hooks deliver context as AI writes code |
| Context Map | Maps knowledge to code scopes via imports, terms, file patterns |
| Auto-Resolution | Context7 + GitHub + web connectors fetch current docs |
| Processing Modes | Fast (BM25), Local (Ollama), Cloud AI — choose in wizard |
| Import Checking | Flags unknown APIs in JS/TS, Python, Go, Rust, Java, Kotlin, Ruby, PHP, C#, Swift, Dart |
| Decision Checking | Detects when code contradicts captured team rules |
| Knowledge Graph | Nodes, typed edges, auto-linking, BFS traversal |
| Source Connectors | GitHub, Notion, Context7, local files, web docs, SDD specs (.specify, .planning, .kiro) |
| Source Detection | Auto-detects 12 content types: OpenAPI, ADRs, Gherkin, project docs, and more |
| Hybrid Search | BM25 + vector search + Reciprocal Rank Fusion + cross-encoder reranking |
| 23 Expert Skills | Distilled from Effective Java, Clean Code, DDD, and 20 more canonical books |
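Hybrid search merges the BM25 and vector rankings with Reciprocal Rank Fusion, which scores each document by summing 1/(k + rank) across rankers (k = 60 is the conventional constant). A standalone sketch of the formula, not BookLib's code:

```typescript
// Reciprocal Rank Fusion: fuse several ranked lists into one.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // rank is 1-based, so position i contributes 1 / (k + i + 1)
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

const bm25 = ['ai-sdk-v6', 'stripe-notes', 'react-query'];
const vector = ['ai-sdk-v6', 'adr-007', 'stripe-notes'];
rrf([bm25, vector]); // 'ai-sdk-v6' first: it tops both rankings
```

Documents that appear high in both lists dominate, which is why RRF is a robust way to combine lexical and semantic retrieval before the cross-encoder rerank.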

Works With

booklib init detects your AI tools and configures MCP automatically.

Claude Code · Cursor · Copilot · Gemini CLI · Codex · Windsurf · Roo Code · Goose · Zed · Continue · OpenHands · Junie · OpenCode · Letta

10 tools via MCP, 14 total with instruction-file support. See AGENTS.md for per-tool setup.


CLI Reference

Setup

| Command | Description |
| --- | --- |
| booklib init | Guided setup — detects stack, configures MCP, builds index |
| booklib index | Rebuild the search index |
| booklib doctor | Health check for skills and config |

Daily use

| Command | Description |
| --- | --- |
| booklib gaps | Find post-training dependencies |
| booklib resolve-gaps | Auto-fix gaps via Context7 and GitHub |
| booklib analyze | Show affected files and post-training APIs |
| booklib search "<query>" | Search skills and knowledge |

Knowledge

| Command | Description |
| --- | --- |
| booklib capture --title "<t>" | Save a team decision or insight |
| booklib check-imports <file> | Flag unknown APIs |
| booklib check-decisions <file> | Check code against team rules |

Sources

| Command | Description |
| --- | --- |
| booklib connect <path> | Index local documentation |
| booklib connect github releases <repo> | Index GitHub changelogs |
| booklib connect notion database <id> | Index Notion pages |
| booklib sources | List connected sources |

Run booklib --help --all for the full list.


Architecture

Everything runs locally by default. Embeddings via HuggingFace Transformers (CoreML on macOS, CPU elsewhere), vector search via Vectra, lexical search via BM25, all persisted in .booklib/. Optional cloud modes (Ollama, Anthropic, OpenAI) for AI-powered reasoning.

BookLib complements code context tools:

| Layer | Tool | What it knows |
| --- | --- | --- |
| Documentation | Context7 | Current library APIs |
| Code structure | lsp-mcp | Functions, types, call graphs |
| Knowledge | BookLib | Post-training gaps, team decisions, expert principles |

Contributing

See CONTRIBUTING.md for the full guide.

MIT License | Issues | Ko-fi | Docs

English · 中文 · 日本語 · 한국어 · Português · Українська
