# Personal Productivity MCP Server

A multi-domain Model Context Protocol (MCP) server providing Claude with tools to manage different aspects of personal productivity.
## Overview

This MCP server provides tools across three domains:
- Career (Phase 1 - Current): Job search automation, application tracking, resume analysis
- Fitness (Phase 3 - Planned): Health tracking, workout management, progress analysis
- Family (Phase 4 - Planned): Kids billing, expense tracking, Math Academy monitoring
## Current Features (Phase 1)

### Career Domain Tools

**Read Tools:**

- `career_search_greenhouse` - Search jobs on Greenhouse job boards
- `career_search_lever` - Search jobs on Lever job boards
- `career_get_applications` - Get tracked applications with an optional status filter

**Write Tools:**

- `career_track_application` - Track a new job application
- `career_update_application` - Update application status and notes

**Analysis Tools:**

- `career_analyze_pipeline` - Get statistics about your application pipeline
## Quick Start

### Prerequisites

- Docker and Docker Compose
- Claude Desktop or compatible MCP client

### Setup

1. Clone the repository:

   ```bash
   cd personal-productivity-mcp
   ```

2. Copy the environment template:

   ```bash
   cp .env.example .env
   ```

3. Build and run with Docker:

   ```bash
   docker-compose up --build
   ```
### Connecting to Claude Desktop

Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "personal-productivity": {
      "command": "docker",
      "args": [
        "exec",
        "-i",
        "personal-productivity-mcp",
        "python",
        "-m",
        "mcp_server.server"
      ]
    }
  }
}
```

Note that `docker exec` attaches to a running container, so the container started by `docker-compose up` must already be running.
Or, for local development without Docker:

```json
{
  "mcpServers": {
    "personal-productivity": {
      "command": "python",
      "args": ["-m", "mcp_server.server"],
      "cwd": "/path/to/personal-productivity-mcp",
      "env": {
        "PYTHONPATH": "/path/to/personal-productivity-mcp/src"
      }
    }
  }
}
```
## Usage Examples

### Searching for Jobs

> Search for software engineering jobs at Anthropic

Claude will use the `career_search_greenhouse` and `career_search_lever` tools to find available positions.
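Under the hood, a Greenhouse scraper can query the public Greenhouse job-board API (`https://boards-api.greenhouse.io/v1/boards/{board}/jobs`). The sketch below is illustrative and not taken from this repo; `filter_jobs` and the sample payload are assumptions about what the tool does with the response:

```python
import json
import urllib.request

GREENHOUSE_API = "https://boards-api.greenhouse.io/v1/boards/{board}/jobs"


def fetch_jobs(board: str) -> list[dict]:
    """Fetch all postings for a Greenhouse board token (e.g. a company slug)."""
    with urllib.request.urlopen(GREENHOUSE_API.format(board=board)) as resp:
        return json.load(resp)["jobs"]


def filter_jobs(jobs: list[dict], keyword: str) -> list[dict]:
    """Keep only postings whose title contains the keyword, case-insensitively."""
    return [job for job in jobs if keyword.lower() in job["title"].lower()]


# Filtering demonstrated on a static payload; a live call would use fetch_jobs(board).
sample = [
    {"title": "Senior Software Engineer", "absolute_url": "https://example.com/1"},
    {"title": "Recruiter", "absolute_url": "https://example.com/2"},
]
matches = filter_jobs(sample, "engineer")
```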
### Tracking Applications

> Track a new application:
> - Company: Anthropic
> - Title: Senior Software Engineer
> - URL: https://jobs.lever.co/anthropic/example
> - Status: applied
### Viewing Your Pipeline

> Show me my application pipeline statistics
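`career_analyze_pipeline` likely boils down to grouping tracked applications by status. A minimal sketch (the field names and return shape are assumptions, not the tool's actual contract):

```python
from collections import Counter


def pipeline_stats(applications: list[dict]) -> dict:
    """Summarize a list of applications into a total plus per-status counts."""
    by_status = Counter(app["status"] for app in applications)
    return {"total": len(applications), "by_status": dict(by_status)}


apps = [
    {"company": "Anthropic", "status": "applied"},
    {"company": "Acme", "status": "interviewing"},
    {"company": "Globex", "status": "applied"},
]
stats = pipeline_stats(apps)
# stats == {'total': 3, 'by_status': {'applied': 2, 'interviewing': 1}}
```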
## Project Structure

```
personal-productivity-mcp/
├── src/
│   └── mcp_server/
│       ├── __init__.py
│       ├── server.py              # Main MCP server
│       ├── models.py              # Pydantic data models
│       ├── config.py              # Configuration management
│       ├── domains/
│       │   └── career/
│       │       ├── read.py        # Read tools
│       │       ├── write.py       # Write tools
│       │       ├── analysis.py    # Analysis tools
│       │       └── scrapers/
│       │           ├── greenhouse.py
│       │           └── lever.py
│       └── utils/
│           ├── storage.py         # Database utilities
│           ├── logging.py         # Logging setup
│           └── cache.py           # Caching utilities
├── data/                          # SQLite database storage
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── README.md
```
## Development

### Local Development Setup

1. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -e ".[dev]"
   ```

3. Install Playwright browsers:

   ```bash
   playwright install chromium
   ```

4. Run the server:

   ```bash
   python -m mcp_server.server
   ```
### Running Tests

```bash
pytest
```

### Code Formatting

```bash
black src/
ruff check src/
```
## Configuration

Environment variables (see `.env.example`):

- `MCP_SERVER_PORT` - Server port (default: `3000`)
- `LOG_LEVEL` - Logging level (default: `INFO`)
- `DATABASE_PATH` - Path to the SQLite database
- `CACHE_TTL_JOBS` - Job listing cache TTL in seconds (default: `3600`)
- `PLAYWRIGHT_HEADLESS` - Run Playwright in headless mode (default: `true`)
## Roadmap
- Phase 1: Basic career domain with Greenhouse/Lever scrapers
- Phase 2: Complete career domain with resume analysis
- Phase 3: Fitness domain with health tracking
- Phase 4: Family domain with billing automation
- Phase 5: Polish and monitoring
## Architecture Decisions

### Why Docker?
- Avoids local Python environment issues (especially with Playwright)
- Consistent deployment across different machines
- Easy to run in home lab environment
### Why SQLite?
- Simple file-based storage, no separate database server needed
- Perfect for single-user personal productivity use case
- Easy backups (just copy the database file)
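Copying the file works, but for a database the server may have open, the stdlib's online backup API gives a consistent snapshot. A sketch (paths are illustrative):

```python
import sqlite3


def backup_database(src_path: str, dest_path: str) -> None:
    """Copy a SQLite database to dest_path using the online backup API,
    which stays consistent even if another connection holds the source open."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)
    dest.close()
    src.close()


# Example (hypothetical paths):
# backup_database("data/productivity.db", "backups/productivity-snapshot.db")
```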
### Why MCP?
- Native integration with Claude
- Structured tool interface with validation
- Async-first design for scraping and API calls
## Contributing

This is a personal project, but feel free to fork and adapt it for your own needs!

## License

MIT