
jlab-mcp

JupyterLab MCP server for SLURM clusters — run GPU code on compute nodes from Claude Code

Updated: Feb 8, 2026


A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.

Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.

Architecture

Claude Code (login node)
    ↕ stdio
MCP Server (login node)
    ↕ HTTP/WebSocket
JupyterLab (compute node, via sbatch)
    ↕
IPython Kernel (GPU access)

Login and compute nodes share a filesystem. The MCP server submits a SLURM job that starts JupyterLab on a compute node, then communicates with it over HTTP/WebSocket. Connection info (hostname, port, token) is exchanged via a file on the shared filesystem.
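The handoff above can be sketched as a simple poll on the shared filesystem: the sbatch job writes a small connection file once JupyterLab is up, and the MCP server waits for it to appear. This is an illustrative sketch; the actual file name and fields used by jlab-mcp are assumptions here.

```python
import json
import os
import time

def wait_for_connection_info(path, timeout=60.0, poll=0.5):
    """Poll the shared filesystem until the compute-node job writes its
    connection file, then parse and return it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with open(path) as f:
                # e.g. {"hostname": "node01", "port": 18001, "token": "..."}
                return json.load(f)
        time.sleep(poll)
    raise TimeoutError(f"no connection info at {path} after {timeout}s")
```

Because the file carries the JupyterLab token, the connection directory should live on the shared filesystem with permissions restricted to the user.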

Setup

# Install (no git clone needed)
uv tool install git+https://github.com/kdkyum/jlab-mcp.git

The SLURM job activates .venv in the current working directory. Set up your project's venv on the shared filesystem with the compute dependencies:

cd /shared/fs/my-project
uv venv
uv pip install jupyterlab ipykernel matplotlib numpy
uv pip install torch --index-url https://download.pytorch.org/whl/cu126  # GPU support

Configuration

All settings are configurable via environment variables. No values are hardcoded for a specific cluster.

| Environment Variable | Default | Description |
|---|---|---|
| JLAB_MCP_DIR | ~/.jlab-mcp | Base working directory |
| JLAB_MCP_NOTEBOOK_DIR | ~/.jlab-mcp/notebooks | Notebook storage |
| JLAB_MCP_LOG_DIR | ~/.jlab-mcp/logs | SLURM job logs |
| JLAB_MCP_CONNECTION_DIR | ~/.jlab-mcp/connections | Connection info files |
| JLAB_MCP_SLURM_PARTITION | gpu | SLURM partition |
| JLAB_MCP_SLURM_GRES | gpu:1 | SLURM generic resource |
| JLAB_MCP_SLURM_CPUS | 4 | CPUs per task |
| JLAB_MCP_SLURM_MEM | 32000 | Memory in MB |
| JLAB_MCP_SLURM_TIME | 4:00:00 | Wall clock time limit |
| JLAB_MCP_SLURM_MODULES | (empty) | Space-separated modules to load (e.g. cuda/12.6) |
| JLAB_MCP_PORT_MIN | 18000 | Port range lower bound |
| JLAB_MCP_PORT_MAX | 19000 | Port range upper bound |
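The port-range settings imply a scan for a free port before launching JupyterLab. A minimal sketch of that pattern, reading the variables above with their documented defaults (the server's actual selection logic may differ):

```python
import os
import socket

def pick_free_port():
    """Scan the configured range for a TCP port we can bind."""
    lo = int(os.environ.get("JLAB_MCP_PORT_MIN", "18000"))
    hi = int(os.environ.get("JLAB_MCP_PORT_MAX", "19000"))
    for port in range(lo, hi + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("", port))
                return port  # free right now; JupyterLab binds it next
            except OSError:
                continue  # port in use; try the next one
    raise RuntimeError(f"no free port in {lo}-{hi}")
```

Note the inherent race: a port that is free at scan time can be taken before JupyterLab binds it, which is why a range rather than a single fixed port is configured.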

Example: Cluster with A100 GPUs and CUDA module

export JLAB_MCP_SLURM_PARTITION=gpu1
export JLAB_MCP_SLURM_GRES=gpu:a100:1
export JLAB_MCP_SLURM_CPUS=18
export JLAB_MCP_SLURM_MEM=125000
export JLAB_MCP_SLURM_TIME=1-00:00:00
export JLAB_MCP_SLURM_MODULES="cuda/12.6"
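These settings map onto #SBATCH directives and module loads in the generated job script. A hedged sketch of that mapping, using the documented defaults (the server's real job template may differ):

```python
import os

def sbatch_header():
    """Build #SBATCH directives and module loads from the
    JLAB_MCP_SLURM_* environment variables (illustrative sketch)."""
    get = os.environ.get
    lines = [
        f"#SBATCH --partition={get('JLAB_MCP_SLURM_PARTITION', 'gpu')}",
        f"#SBATCH --gres={get('JLAB_MCP_SLURM_GRES', 'gpu:1')}",
        f"#SBATCH --cpus-per-task={get('JLAB_MCP_SLURM_CPUS', '4')}",
        f"#SBATCH --mem={get('JLAB_MCP_SLURM_MEM', '32000')}",
        f"#SBATCH --time={get('JLAB_MCP_SLURM_TIME', '4:00:00')}",
    ]
    # Space-separated module list, e.g. "cuda/12.6"
    for mod in get("JLAB_MCP_SLURM_MODULES", "").split():
        lines.append(f"module load {mod}")
    return "\n".join(lines)
```

With the exports above, this yields a job requesting one A100 on partition gpu1 with cuda/12.6 loaded before JupyterLab starts.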

Claude Code Integration

Add to ~/.claude.json or project .mcp.json:

{
  "mcpServers": {
    "jlab-mcp": {
      "command": "jlab-mcp",
      "env": {
        "JLAB_MCP_SLURM_PARTITION": "gpu1",
        "JLAB_MCP_SLURM_GRES": "gpu:a100:1",
        "JLAB_MCP_SLURM_MODULES": "cuda/12.6"
      }
    }
  }
}

The MCP server resolves .venv relative to its working directory and has the compute-node job activate that venv. Since Claude Code launches MCP servers from your project directory, the right venv is picked up automatically.
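The venv lookup amounts to checking for .venv under the launch directory. A sketch, with the error message being an assumption rather than the server's actual output:

```python
from pathlib import Path

def project_venv(cwd=None):
    """Locate the project's .venv relative to the directory the MCP
    server was launched from (illustrative sketch)."""
    root = Path(cwd or Path.cwd())
    venv = root / ".venv"
    if not (venv / "bin" / "activate").exists():
        raise FileNotFoundError(
            f"no .venv in {root}; create one with 'uv venv' on the shared filesystem"
        )
    return venv
```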

MCP Tools

| Tool | Description |
|---|---|
| start_new_session | Submit SLURM job, start kernel, create empty notebook |
| start_session_resume_notebook | Resume existing notebook (re-executes all cells) |
| start_session_continue_notebook | Fork notebook with fresh kernel |
| execute_code | Run Python code, append cell to notebook |
| edit_cell | Edit and re-execute a cell (supports negative indexing) |
| add_markdown | Add markdown cell to notebook |
| shutdown_session | Stop kernel, cancel SLURM job, clean up |

Resource: jlab-mcp://server/status — returns active sessions and job states.
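edit_cell's negative indexing follows Python list semantics, so -1 targets the most recently executed cell. A sketch of how such an index would resolve to a notebook cell (illustrative; the server's bounds handling may differ):

```python
def resolve_cell_index(index, n_cells):
    """Map a possibly-negative cell index onto [0, n_cells),
    mirroring Python list indexing."""
    if index < 0:
        index += n_cells  # e.g. -1 -> last cell
    if not 0 <= index < n_cells:
        raise IndexError(f"cell index out of range for {n_cells} cells")
    return index
```

This makes the common fix-the-last-cell workflow a single edit_cell call with index -1, without tracking absolute cell positions.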

Testing

# Unit tests (no SLURM needed)
uv run python -m pytest tests/test_slurm.py tests/test_notebook.py tests/test_image_utils.py -v

# Integration tests (requires SLURM cluster)
uv run python -m pytest tests/test_tools.py -v -s --timeout=300

Acknowledgments

This project is inspired by goodfire-ai/scribe, which provides MCP-based notebook code execution for Claude. The tool interface design, image resizing approach, and notebook management patterns are adapted from scribe for use on HPC/SLURM clusters.

License

MIT
