
Job Listings MCP Server

A Python-based microservice that scrapes, deduplicates, and stores fresh job listings from multiple platforms. It enables users to access and filter a live feed of job data through a REST API for integration into portfolio sites and other applications.

Updated: Mar 2, 2026

Overview

A standalone Python microservice that scrapes fresh job listings using Jobspy, stores them in SQLite with deduplication, and exposes a /jobs REST endpoint for embedding in a portfolio site as a live feed.


Features

  • Multi-site scraping — pulls fresh listings from several job boards via Jobspy
  • Tiered role search — searches roles by priority tier (e.g., T1 primary, T2 secondary)
  • Smart deduplication — listings repeated across runs and sites are stored only once
  • Hourly refresh — APScheduler triggers a scrape run every hour
  • Query filtering — filter by location, keyword, and recency via /jobs params
  • CORS-enabled — safe to fetch directly from browser clients
  • Deploy-ready — ships with a Dockerfile for Railway/Render
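One common way to get the deduplication behavior described above is a UNIQUE constraint plus INSERT OR IGNORE in SQLite. The schema and column set below are a minimal sketch, not the repo's actual schema:

```python
import sqlite3

# Hypothetical dedup scheme: a UNIQUE constraint on (job_title, company,
# location) makes INSERT OR IGNORE silently skip rows already scraped.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        job_title TEXT, company TEXT, location TEXT,
        UNIQUE (job_title, company, location)
    )
""")

def insert_job(title: str, company: str, location: str) -> bool:
    """Insert a job if unseen; return True when a new row was actually added."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO jobs (job_title, company, location) VALUES (?, ?, ?)",
        (title, company, location),
    )
    conn.commit()
    # rowcount is 1 for a real insert, 0 when the row was ignored as a duplicate
    return cur.rowcount == 1

insert_job("AI Solutions Engineer", "Acme Corp", "San Francisco, CA")  # new row
insert_job("AI Solutions Engineer", "Acme Corp", "San Francisco, CA")  # duplicate, ignored
```

A content hash over the same fields would work equally well; the constraint approach keeps the logic inside the database.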

Architecture

APScheduler (1hr)  →  Jobspy Scraper  →  SQLite (deduped)  ←  FastAPI /jobs
                                                                    ↕
                                                          Portfolio Site (fetch)
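The hourly loop on the left of the diagram can be sketched with a stdlib stand-in. The repo itself uses APScheduler; `schedule_scrape`, the interval, and the run count below are illustrative only:

```python
import threading

# Stdlib stand-in for the APScheduler loop: re-arm a Timer after each scrape
# so runs repeat at a fixed interval, without blocking the web server thread.
def schedule_scrape(scrape, interval_s: float, runs: int) -> threading.Event:
    """Run `scrape` every `interval_s` seconds, `runs` times; set the event when done."""
    done = threading.Event()

    def tick(remaining: int):
        scrape()
        if remaining > 1:
            threading.Timer(interval_s, tick, args=(remaining - 1,)).start()
        else:
            done.set()

    threading.Timer(interval_s, tick, args=(runs,)).start()
    return done

results = []
finished = schedule_scrape(lambda: results.append("scraped"), interval_s=0.01, runs=3)
finished.wait(timeout=2)
```

In the real service, APScheduler's BackgroundScheduler plays the role of the Timer chain, with the interval set to one hour.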

Quick Start

1. Clone & Install

cd jobs-mcp-server
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt

2. Configure

cp .env.example .env
# Edit .env as needed

3. Run

python main.py

The server starts at http://localhost:8000. An initial scrape runs automatically in the background.


API Endpoints

GET / — Health Check

{
  "status": "healthy",
  "service": "Job Listings MCP Server",
  "total_jobs_in_db": 142,
  "scrape_interval_hours": 1
}

GET /jobs — List Job Listings

Query Params:

Param     Type    Description
--------  ------  ------------------------------------------------
location  string  Filter by location (substring, case-insensitive)
keyword   string  Filter by keyword in job title
hours     int     Only jobs scraped within the last N hours
limit     int     Max results (default 100, max 500)
offset    int     Pagination offset

Example:

curl "http://localhost:8000/jobs?location=San%20Francisco&keyword=AI&hours=24"

Response:

{
  "count": 5,
  "filters": {
    "location": "San Francisco",
    "keyword": "AI",
    "hours": 24
  },
  "jobs": [
    {
      "id": 1,
      "job_title": "AI Solutions Engineer",
      "company": "Acme Corp",
      "location": "San Francisco, CA",
      "salary": "USD 120,000–160,000/yearly",
      "apply_link": "https://linkedin.com/jobs/...",
      "date_posted": "2025-01-15",
      "date_scraped": "2025-01-15T12:00:00+00:00",
      "source_site": "linkedin",
      "role_tier": "T2 — Secondary"
    }
  ]
}
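The same request can be built programmatically. The helper below is a hypothetical sketch (`build_jobs_url` is not part of the server) that mirrors the documented query params:

```python
from urllib.parse import urlencode

# Hypothetical client-side helper: assemble a /jobs URL from the documented
# query params, omitting any filter left as None.
def build_jobs_url(base: str, *, location=None, keyword=None, hours=None,
                   limit=None, offset=None) -> str:
    params = {k: v for k, v in {
        "location": location, "keyword": keyword, "hours": hours,
        "limit": limit, "offset": offset,
    }.items() if v is not None}
    return f"{base}/jobs?{urlencode(params)}" if params else f"{base}/jobs"

build_jobs_url("http://localhost:8000", location="San Francisco", keyword="AI", hours=24)
# → "http://localhost:8000/jobs?location=San+Francisco&keyword=AI&hours=24"
```

Note that `urlencode` escapes spaces as `+` rather than `%20`; both forms are valid in a query string.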

POST /scrape — Manual Trigger

Triggers a scrape run in the background.

curl -X POST http://localhost:8000/scrape

GET /status — Last Scrape Status

curl http://localhost:8000/status

GET /roles — Configured Role Tiers

curl http://localhost:8000/roles

Deployment

Railway

  1. Fork the mcp-server repo to a new GitHub repo (or subdirectory).
  2. Connect Railway to the repo.
  3. Railway auto-detects the Dockerfile.
  4. Add a Volume at /data to persist the SQLite DB.
  5. Set environment variables in the Railway dashboard.

Render

  1. Create a new Web Service.
  2. Point to the repo/directory.
  3. Set Build Command: pip install -r requirements.txt
  4. Set Start Command: python main.py
  5. Add a Disk at /data and set DATA_DIR=/data.
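The DATA_DIR variable in step 5 tells the service where to keep its SQLite file so it survives restarts. A minimal sketch of how such a setting could be resolved (the filename jobs.db and the helper name are assumptions, not the repo's actual code):

```python
import os

# Hypothetical sketch: default to the working directory locally, but point at
# the mounted disk (/data on Railway/Render) when DATA_DIR is set.
# "jobs.db" is an assumed filename, not necessarily the repo's.
def resolve_db_path(env: dict) -> str:
    data_dir = env.get("DATA_DIR", ".")
    return os.path.join(data_dir, "jobs.db")

resolve_db_path({})                     # local default, relative path
resolve_db_path({"DATA_DIR": "/data"})  # persisted on the mounted disk
```

Without a persistent disk, the database lives on the container's ephemeral filesystem and is wiped on every redeploy.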

Portfolio Integration

In your Next.js portfolio, fetch from the deployed URL:

// In a Next.js API route or client component
const API_URL = process.env.NEXT_PUBLIC_JOBS_API_URL || 'https://your-jobs-server.up.railway.app';

async function fetchJobs(filters?: { location?: string; keyword?: string; hours?: number }) {
  const params = new URLSearchParams();
  if (filters?.location) params.set('location', filters.location);
  if (filters?.keyword) params.set('keyword', filters.keyword);
  if (filters?.hours) params.set('hours', String(filters.hours));

  const res = await fetch(`${API_URL}/jobs?${params.toString()}`);
  if (!res.ok) throw new Error(`Jobs API error: ${res.status}`);
  return res.json();
}

License

MIT
