🤖 Promptbook MCP
Your personal cookbook for AI prompts with RAG-powered semantic search
✨ What is this?
Promptbook MCP is a plug-and-play server that helps developers who use AI coding assistants (like GitHub Copilot, Claude, etc.) to:
- 📚 Store prompts from your AI sessions automatically
- 🔍 Search prompts by meaning, not just keywords (RAG-powered)
- 🤖 Access your prompt library from any MCP-compatible tool
- 📊 Organize prompts by category (refactoring, testing, debugging, etc.)
Perfect for: Developers who reuse AI prompts and want a searchable knowledge base.
🚀 Quick Start
Get running in 30 seconds:
Option 1: Automated Setup (Recommended)
git clone https://github.com/isaacpalomero/promptbook-mcp.git
cd promptbook-mcp
./setup.sh
That's it! 🎉
Option 2: Docker
git clone https://github.com/isaacpalomero/promptbook-mcp.git
cd promptbook-mcp
docker-compose up -d
Done! Your server is running.
💡 Use Cases
Problem: You wrote the perfect refactoring prompt for ChatGPT/Copilot last week. Now you can't find it.
Solution: Promptbook MCP auto-saves and indexes all your prompts.
# Later, search by meaning
search_prompts("refactor typescript to use dependency injection")
→ Returns your exact prompt from last week
Real Examples
- Refactoring patterns - Store your best "clean code" prompts
- Testing strategies - Find that perfect test structure prompt
- Debugging workflows - Access proven debugging prompts
- Code review - Reuse comprehensive review prompts
📦 Installation
Prerequisites
- Python 3.9+ OR Docker
- 2GB RAM minimum
- macOS, Linux, or Windows
Detailed Setup
Automated Setup (Recommended)
# Clone repository
git clone https://github.com/isaacpalomero/promptbook-mcp.git
cd promptbook-mcp
# Run setup script
chmod +x setup.sh
./setup.sh
# Activate virtual environment
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Start server
python mcp_server.py
Docker Method
# Clone repository
git clone https://github.com/isaacpalomero/promptbook-mcp.git
cd promptbook-mcp
# Copy environment file
cp .env.example .env
# Start services
docker-compose up -d
# Verify
docker-compose logs
Manual Setup
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Create directories
mkdir -p prompts sessions
# Index existing prompts (if any)
python prompt_rag.py --index
# Start server
python mcp_server.py
🎯 Features
1. Semantic Search (RAG)
Find prompts by meaning, not exact words:
search_prompts("how to add unit tests")
→ Finds prompts about "testing", "jest", "pytest", etc.
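Under the hood, semantic search boils down to ranking stored prompts by the similarity of their embedding vectors to the query's embedding. A minimal stdlib-only sketch of that ranking step (the toy 3-dimensional vectors stand in for real model output; `rank_prompts` is an illustrative name, not the actual prompt_rag.py API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_prompts(query_vec: list[float], prompt_vecs: dict) -> list:
    """Return (prompt_id, score) pairs sorted best-first."""
    scored = [(pid, cosine_similarity(query_vec, vec)) for pid, vec in prompt_vecs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy embeddings: a "testing" prompt and a "refactoring" prompt
prompts = {
    "testing/unit-tests.md": [0.9, 0.1, 0.0],
    "refactoring/di.md":     [0.1, 0.9, 0.2],
}
# A query vector closer to the "testing" prompt
ranking = rank_prompts([0.8, 0.2, 0.1], prompts)
print(ranking[0][0])  # testing/unit-tests.md
```

In the real server, a sentence-transformer model produces the vectors and ChromaDB performs the nearest-neighbor lookup, but the ranking principle is the same.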
2. Auto-Organization
Drop AI session files → Auto-categorized and indexed:
sessions/
└── copilot-session-abc123.md → Auto-processed into:
├── prompts/refactoring/prompt1.md
├── prompts/testing/prompt2.md
└── Updated RAG index
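The organizer's job can be pictured as two steps: split an exported session into individual prompts, then route each one to a category folder. A hedged sketch (the `## Prompt` section convention and the keyword table are assumptions for illustration; the real prompt_organizer rules may differ):

```python
import re

# Hypothetical keyword -> category routing
CATEGORY_KEYWORDS = {
    "refactoring": ("refactor", "clean up", "extract"),
    "testing": ("test", "pytest", "jest"),
    "debugging": ("debug", "stack trace", "fix bug"),
}

def categorize(prompt_text: str) -> str:
    """Pick the first category whose keywords appear in the prompt, else 'general'."""
    lowered = prompt_text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return category
    return "general"

def split_session(markdown: str) -> list[str]:
    """Treat each '## Prompt' section of an exported session as one prompt."""
    parts = re.split(r"^## Prompt.*$", markdown, flags=re.MULTILINE)
    return [p.strip() for p in parts if p.strip()]

session = """## Prompt 1
Refactor this service to use dependency injection.
## Prompt 2
Write pytest cases for the new parser."""

for prompt in split_session(session):
    print(categorize(prompt), "->", prompt.splitlines()[0])
```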
3. Multi-Provider Embeddings
Choose your embedding backend:
- Sentence-Transformers (default, local, CPU)
- LMStudio (GPU-accelerated, better quality)
# Use local embeddings (default)
EMBEDDING_PROVIDER=sentence-transformer
# Or use LMStudio
EMBEDDING_PROVIDER=lmstudio
LMSTUDIO_URL=http://localhost:1234
4. MCP Tools (13 Available)
Access via any MCP client:
| Tool | Description |
|---|---|
| search_prompts | Semantic search by meaning |
| create_prompt | Add new prompt directly |
| update_prompt | Modify existing prompt |
| delete_prompt | Remove prompt safely |
| get_prompt_by_file | Get full content |
| list_prompts_by_category | Browse by category |
| find_similar_prompts | Find related prompts |
| get_library_stats | View statistics |
| index_prompts | Rebuild search index |
| organize_session | Process AI session file |
| get_prompt_index | View full metadata index |
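Internally, dispatching a tool call reduces to a name-to-handler map. A minimal sketch (the handler bodies and `handle_request` are hypothetical stand-ins, not the actual mcp_server.py code):

```python
from typing import Any, Callable

# Hypothetical handlers; the real server wires these to the RAG backend.
def search_prompts(query: str) -> list:
    return [f"result for: {query}"]

def get_library_stats() -> dict:
    return {"prompts": 0, "categories": 0}

TOOLS: dict = {
    "search_prompts": search_prompts,
    "get_library_stats": get_library_stats,
}

def handle_request(tool: str, **arguments: Any) -> Any:
    """Route an incoming MCP tool call to its handler, or fail loudly."""
    if tool not in TOOLS:
        raise ValueError(f"Unknown tool: {tool}")
    return TOOLS[tool](**arguments)

print(handle_request("search_prompts", query="unit tests"))
```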
Available categories:
refactoring, testing, debugging, implementation, documentation, code-review, general
🔌 MCP Client Setup
Claude Desktop
1. Open the Claude config file:

   macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
   Windows: %APPDATA%\Claude\claude_desktop_config.json

2. Add the Promptbook MCP server:

   {
     "mcpServers": {
       "promptbook": {
         "command": "python",
         "args": ["/path/to/promptbook-mcp/mcp_server.py"]
       }
     }
   }

3. Restart Claude Desktop
Other MCP Clients
Any MCP-compatible client can connect using the same pattern. See MCP Protocol docs for details.
📖 Documentation
- Setup Guide - Detailed installation steps
- Deployment Options - Docker, local, and production setups
- Embeddings Guide - Configure RAG providers
- Contributing - How to contribute
- Changelog - Version history
👨‍💻 For Developers
⚙️ Configuration
All runtime settings are centralized in config.py and exposed through an immutable Config dataclass. The server loads environment variables once at startup.
Environment Variables
| Variable | Description | Default |
|---|---|---|
| PROMPTS_DIR | Root folder for categorized prompts | ./prompts |
| SESSIONS_DIR | Directory watched for exported sessions | ./sessions |
| VECTOR_DB_DIR | Persistent ChromaDB path | <PROMPTS_DIR>/.vectordb |
| EMBEDDING_PROVIDER | sentence-transformer or lmstudio | sentence-transformer |
| EMBEDDING_MODEL | Sentence Transformers model name | all-MiniLM-L6-v2 |
| LMSTUDIO_URL / LMSTUDIO_MODEL | LMStudio endpoint + model | http://localhost:1234 / nomic-embed-text |
| LMSTUDIO_DIMENSION | Expected LMStudio embedding size | 768 |
| CHUNK_SIZE / CHUNK_OVERLAP | Prompt chunking parameters | 500 / 100 |
| ENABLE_RAG | Toggle RAG initialization | true |
| AUTO_REINDEX_INTERVAL | Seconds between auto-index checks | 30 |
| LOG_LEVEL | Python logging level | INFO |
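The CHUNK_SIZE / CHUNK_OVERLAP pair controls how prompts are split before embedding: each chunk shares its trailing characters with the start of the next, so context is not lost at chunk boundaries. A minimal sketch of that idea (`chunk_text` is an illustrative name, not the project's actual function):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list:
    """Split text into fixed-size character chunks, with neighbors overlapping."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Stop once the remaining tail is already covered by the previous chunk
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

# 1200 characters with the default 500/100 settings -> three chunks
chunks = chunk_text("a" * 1200, chunk_size=500, overlap=100)
print(len(chunks), [len(c) for c in chunks])  # 3 [500, 500, 400]
```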
Configuration file:
# Copy example
cp .env.example .env
# Edit settings
vim .env
Access config anywhere in code:
from config import CONFIG
print(CONFIG.prompts_dir) # Validated Path object
print(CONFIG.embedding_provider) # Type-safe enum
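To make the "loaded once, immutable" behavior concrete, the pattern looks roughly like a frozen dataclass populated from the environment at import time. This is a hedged sketch, not the contents of the real config.py; only the two field names shown in the usage above are assumed:

```python
import os
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)  # frozen=True makes attribute assignment raise
class Config:
    prompts_dir: Path
    embedding_provider: str

def load_config() -> Config:
    """Read environment variables once and freeze them into an immutable object."""
    return Config(
        prompts_dir=Path(os.environ.get("PROMPTS_DIR", "./prompts")),
        embedding_provider=os.environ.get("EMBEDDING_PROVIDER", "sentence-transformer"),
    )

CONFIG = load_config()
print(CONFIG.prompts_dir, CONFIG.embedding_provider)
```

Because the dataclass is frozen, any later attempt to reassign a field raises an error, which is what makes the settings safe to share across modules.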
🧪 Testing & Quality
We enforce strict quality gates:
# Run all tests
pytest
# Run with coverage
pytest --cov=. --cov-report=html
# Run only unit tests
pytest tests/unit/
# Run only integration tests
pytest tests/integration/
# Style check
flake8 --max-line-length=100
# Type check
mypy --strict mcp_server.py prompt_rag.py prompt_organizer.py
Quality Standards
- Test Coverage: Minimum 80%
- Type Safety: mypy --strict must pass
- Code Style: Flake8 compliant
- CI Pipeline: All checks run on Python 3.9-3.12
A GitHub Actions workflow (.github/workflows/ci.yml) runs these checks automatically.
🐳 Docker Advanced
Multi-Stage Build
The Dockerfile uses a multi-stage build for optimized image size:
# Stage 1: Builder (installs dependencies)
FROM python:3.11-slim as builder
# Stage 2: Runtime (slim final image)
FROM python:3.11-slim
COPY --from=builder /app/.venv /app/.venv
Result: Final image < 800 MB
Health Checks
Docker includes automatic health monitoring:
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD python -m prompt_rag --health || exit 1
Volume Mounts
Persist data outside containers:
volumes:
- ./prompts:/app/prompts # Prompt storage
- ./sessions:/app/sessions # Session import
- ./prompts/.vectordb:/app/prompts/.vectordb # RAG database
Building & Running
# Build image
docker build -t promptbook-mcp:latest .
# Run container
docker run --rm -i \
-v "$(pwd)/prompts:/app/prompts" \
-v "$(pwd)/sessions:/app/sessions" \
-e EMBEDDING_PROVIDER=sentence-transformer \
promptbook-mcp:latest
🏗️ Architecture
Components
┌─────────────────────────────────────┐
│ MCP Client (Claude) │
└──────────────┬──────────────────────┘
│ MCP Protocol
┌──────────────▼──────────────────────┐
│ mcp_server.py │
│ - 13 MCP tools │
│ - Request routing │
│ - Error handling │
└──────────────┬──────────────────────┘
│
┌───────┴────────┐
│ │
┌──────▼──────┐ ┌─────▼────────┐
│prompt_rag.py│ │prompt_org.py │
│- RAG search │ │- Session │
│- Embeddings │ │ parsing │
│- ChromaDB │ │- Auto-org │
└─────────────┘ └──────────────┘
│ │
└───────┬────────┘
│
┌──────────────▼──────────────────────┐
│ prompts/ │
│ ├── refactoring/ │
│ ├── testing/ │
│ ├── debugging/ │
│ └── .vectordb/ │
└─────────────────────────────────────┘
Data Flow
1. User asks Claude to search prompts
2. Claude sends an MCP request to mcp_server.py
3. The server calls prompt_rag.py for semantic search
4. RAG queries the ChromaDB vector database
5. Results are returned to Claude with metadata
6. User sees relevant prompts instantly
🤝 Contributing
We love contributions! 🎉
Quick Start
1. Fork the repository
2. Create a feature branch: git checkout -b feature/amazing-feature
3. Make changes and add tests
4. Run tests: pytest
5. Ensure quality: flake8 && mypy --strict
6. Commit: git commit -m 'feat: add amazing feature'
7. Push: git push origin feature/amazing-feature
8. Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
Commit Convention
We follow Conventional Commits:
- feat: New feature
- fix: Bug fix
- docs: Documentation only
- style: Code style changes
- refactor: Code refactoring
- test: Test changes
- chore: Build/tooling changes
📄 License
This project is licensed under the MIT License - see LICENSE file for details.
🙏 Acknowledgments
- Built with MCP Protocol
- Powered by ChromaDB and Sentence-Transformers
- Inspired by the need for better prompt management in AI-assisted development
📞 Support
Questions, bugs, or feature requests? Open an issue on the GitHub repository.
Made with ❤️ for the AI development community