
Ollama MCP Server

🌐 Compatible with ANY IDE or application that supports the Model Context Protocol (MCP)
Works seamlessly with Cursor IDE, Claude Desktop, Cline, and all MCP-compatible clients.


Universal MCP Server for Ollama - A production-ready Model Context Protocol (MCP) server that works with any IDE or application supporting MCP. This server provides a standardized interface to interact with local Ollama instances, enabling seamless integration across all MCP-compatible clients. Whether you use Cursor IDE, Claude Desktop, Cline, or any other MCP-compatible tool, this server works out of the box.

Features

  • List Models: View all available Ollama models with detailed metadata
  • Get Model Info: Retrieve comprehensive information about specific models
  • Chat: Have conversations with Ollama models with support for system prompts and conversation context
  • Generate: Generate text from prompts with optional system prompts
  • Pull Models: Download models from Ollama's model registry
  • Delete Models: Remove models from your local Ollama installation
  • Streaming Support: Optional streaming responses for real-time interactions
  • Cross-Platform: Works on Windows, macOS, and Linux
  • TypeScript: Fully typed with TypeScript for better developer experience
  • MCP Protocol: Compliant with Model Context Protocol standards
  • Universal Compatibility: Works with ANY IDE or application that supports MCP - not limited to specific editors

Prerequisites

System Requirements

  • Node.js: Version 18.0.0 or higher
  • npm: Version 8.0.0 or higher (comes with Node.js)
  • Ollama: Installed and running locally
    • Default URL: http://localhost:11434
    • At least one Ollama model downloaded (e.g., ollama pull llama2)

Platform-Specific Requirements

Windows

  • Windows 10/11 (64-bit)
  • PowerShell 5.1+ or Windows Terminal
  • Node.js installed and available in PATH

macOS

  • macOS 10.15 (Catalina) or later
  • Terminal or iTerm2
  • Node.js installed via Homebrew or official installer

Linux

  • Ubuntu 20.04+, Debian 11+, or equivalent
  • Bash shell
  • Node.js installed via package manager or NodeSource repository

Verifying Prerequisites

Check if you have the required software installed:

# Check Node.js version
node --version  # Should be v18.0.0 or higher

# Check npm version
npm --version   # Should be 8.0.0 or higher

# Check if Ollama is running
curl http://localhost:11434/api/tags
# Or on Windows PowerShell:
# Invoke-WebRequest -Uri http://localhost:11434/api/tags

# Check if you have models installed
ollama list

Installation

Option 1: Install from npm (Recommended)

The easiest way to install @muhammadmehdi/ollama-mcp-server is via npm:

npm install -g @muhammadmehdi/ollama-mcp-server

💡 Universal Compatibility: This package works with any MCP-compatible IDE or application, not just specific editors. If your tool supports MCP, this server will work!

📖 Complete npm Package Guide: For a detailed step-by-step guide specifically for npm package users, see NPM_PACKAGE_GUIDE.md. It covers everything from installation to troubleshooting and is ideal for users who want to use the package without touching the source code.

This will install the package globally and make the ollama-mcp-server command available system-wide.

Note: After global installation, you can use the server directly in your MCP client configuration by pointing to the global installation path:

Windows:

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["C:/Users/<YourUsername>/AppData/Roaming/npm/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}

macOS/Linux:

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/usr/local/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}

Or use npm to look up the global installation path:

# Find the global path
npm list -g --depth=0 @muhammadmehdi/ollama-mcp-server

Option 2: Install Locally

For local installation in your project:

npm install @muhammadmehdi/ollama-mcp-server

Then reference it in your MCP configuration using the local node_modules path.

💡 Works Everywhere: This server uses the standard MCP protocol, so it works with any MCP-compatible IDE or application you configure it with.

Next Steps After Installation

⚡ Important: Your IDE/client starts the MCP server automatically. You don't need to run any server commands manually! Just configure it once, and your IDE will handle starting and stopping it.

After installing the package via npm, follow these steps to get it working with your IDE:

Step 1: Find the Installation Path

You need to locate where the package was installed to configure your MCP client.

For Global Installation:

# Windows PowerShell
npm root -g | ForEach-Object { Join-Path $_ "@muhammadmehdi\ollama-mcp-server\dist\index.js" }

# macOS/Linux
echo "$(npm root -g)/@muhammadmehdi/ollama-mcp-server/dist/index.js"

For Local Installation:

# The path will be in your project's node_modules
# Windows: .\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js
# macOS/Linux: ./node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js

Step 2: Verify Ollama is Running

Before configuring, make sure Ollama is running:

# Check if Ollama is accessible
curl http://localhost:11434/api/tags

# Or on Windows PowerShell:
Invoke-WebRequest -Uri http://localhost:11434/api/tags

# Check installed models
ollama list

If Ollama isn't running, start it:

ollama serve

Step 3: Configure Your MCP Client

Now you need to add the server to your MCP client configuration. The exact steps depend on your IDE:

For Cursor IDE:

  1. Open Cursor Settings → Tools & Integrations → MCP Tools
  2. Add the server with the path from Step 1
  3. See Cursor IDE Configuration for detailed steps

For Claude Desktop:

  1. Edit the configuration file at:
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Linux: ~/.config/Claude/claude_desktop_config.json
  2. Add the server configuration (see Claude Desktop Configuration)

For Other MCP Clients:

Add this configuration (replace the path with your actual path from Step 1):

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["<YOUR_PATH_FROM_STEP_1>"],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}

Step 4: Restart Your IDE/Client

Important: Completely close and restart your IDE or MCP client for the changes to take effect.

Step 5: Test the Setup

  1. Open your IDE's AI chat panel
  2. Try asking: "What Ollama models do I have installed?"
  3. The AI should use the list_models tool to show your models

How the Server Runs:

  • Automatic: Your MCP client (Cursor IDE, Claude Desktop, etc.) automatically starts the server when needed
  • No Manual Commands: You don't need to run npm start or any server commands
  • Background Process: The server runs in the background, managed by your IDE
  • Auto-Restart: If the server stops, your IDE will restart it automatically

If it works, you're all set! If not, check the Troubleshooting section.

Quick Reference

  • Installation: npm install -g @muhammadmehdi/ollama-mcp-server
  • Find Path: Use the commands in Step 1 above
  • Configure: Add to your MCP client's configuration file
  • Restart: Close and reopen your IDE/client
  • Test: Ask "What Ollama models do I have?"
  • Server Runs: Automatically by your IDE - no manual commands needed! ✅

💡 Need More Help? See the detailed Quick Start Guide or Troubleshooting section.

Common Questions About Running the Server

Q: Do I need to run the server manually?
A: No! Your IDE/client automatically starts and manages the server. Just configure it once, and your IDE handles everything.

Q: How do I start the server?
A: You don't need to. After configuration, your IDE starts it automatically when you use AI features. No npm start or terminal commands needed.

Q: Will the server keep running?
A: Yes, your IDE manages it. It runs in the background and restarts automatically if needed. You don't need to keep any terminals open.

Q: Can I run it manually for testing?
A: Yes, you can use npm start for debugging, but it's not needed for normal usage. Your IDE handles it automatically.

Q: What if the server stops?
A: Your IDE will automatically restart it. You don't need to do anything - it's all managed automatically.

Option 3: Install from Source

If you want to install from the source repository:

Step 1: Clone the Repository

git clone https://github.com/your-repo/ollama-mcp-server.git
cd ollama-mcp-server

Or download and extract the ZIP file, then navigate to the directory.

Step 2: Install Dependencies

npm install

This will install all required dependencies:

  • @modelcontextprotocol/sdk (^1.0.4) - MCP SDK
  • ollama (^0.5.7) - Ollama client library

Step 3: Build the Project

npm run build

This compiles TypeScript to JavaScript in the dist/ directory.

Step 4: Verify Installation

Test that the server can start:

npm start

Press Ctrl+C to stop the server. If it starts without errors, installation is successful.

Finding Installation Path

After installation, you can find the package location:

# For global installation
npm list -g --depth=0 @muhammadmehdi/ollama-mcp-server

# For local installation
npm list --depth=0 @muhammadmehdi/ollama-mcp-server

Configuration

Environment Variables

The server can be configured using environment variables:

Variable           Description                         Default                   Required
OLLAMA_BASE_URL    Base URL for your Ollama instance   http://localhost:11434    No
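
For orientation, resolving this variable inside the server presumably looks something like the sketch below (an assumption based on the ollama client library listed in the dependencies; the host option and the fallback URL are the documented ones):

import { Ollama } from "ollama";

// Fall back to the documented default when OLLAMA_BASE_URL is not set.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";

// The ollama client takes the base URL via its `host` option.
const ollamaClient = new Ollama({ host: baseUrl });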

Setting Environment Variables

Windows (PowerShell)

# Temporary (current session only)
$env:OLLAMA_BASE_URL = "http://localhost:11434"

# Permanent (User-level)
[System.Environment]::SetEnvironmentVariable("OLLAMA_BASE_URL", "http://localhost:11434", "User")

Windows (CMD)

REM Temporary (current session only)
set OLLAMA_BASE_URL=http://localhost:11434

REM Permanent (User-level)
setx OLLAMA_BASE_URL "http://localhost:11434"

macOS / Linux (Bash/Zsh)

# Temporary (current session only)
export OLLAMA_BASE_URL=http://localhost:11434

# Permanent (add to ~/.bashrc or ~/.zshrc)
echo 'export OLLAMA_BASE_URL=http://localhost:11434' >> ~/.bashrc
source ~/.bashrc

Remote Ollama Configuration

If your Ollama instance is running on a different host or port:

# Example: Remote Ollama server
export OLLAMA_BASE_URL=http://192.168.1.100:11434

# Example: Custom port
export OLLAMA_BASE_URL=http://localhost:8080

Configuration in MCP Clients

Environment variables can also be set directly in MCP client configuration files (see MCP Client Configuration).

Usage

How the Server Runs

🎯 Key Point: After installing via npm and configuring your MCP client, you don't need to manually run the server. Your IDE/client (Cursor IDE, Claude Desktop, etc.) will automatically start and manage the server for you.

The server runs automatically when:

  • ✅ Your MCP client (IDE) starts up
  • ✅ You use AI features that require the Ollama MCP server
  • ✅ The server is configured in your MCP client settings

You don't need to:

  • ❌ Run npm start manually
  • ❌ Keep a terminal open
  • ❌ Start any server processes
  • ❌ Manage server lifecycle

The MCP client handles everything automatically!

Manual Server Commands (For Development/Testing Only)

These commands are only for development or debugging. Regular users don't need these:

Production Mode (Development Only)

npm start

Runs the compiled JavaScript from dist/index.js. Note: This is only for testing. In normal use, your IDE starts the server automatically.

Development Mode (Development Only)

npm run dev

Runs the TypeScript source directly using tsx for faster iteration during development.

Watch Mode (Development Only)

npm run watch

Watches for file changes and automatically rebuilds the project during development.

Testing the Server (For Debugging)

If you need to manually test the server (for troubleshooting):

# Start the server manually (for testing only)
npm start

# The server will wait for MCP protocol messages on stdio
# Press Ctrl+C to stop it

Remember: In normal usage, your MCP client handles starting the server automatically. You only need these commands for development or troubleshooting.

Quick Start Guide (After npm Installation)

This section explains how to use @muhammadmehdi/ollama-mcp-server after installing it from npm. Follow these steps to get started quickly.

🌐 Universal MCP Compatibility: This server works with any IDE or application that supports the Model Context Protocol (MCP). Whether you use Cursor IDE, Claude Desktop, Cline, or any other MCP-compatible tool, the setup process is the same.

Step 1: Install the Package

Choose one of these installation methods:

Global Installation (Recommended for most users):

npm install -g @muhammadmehdi/ollama-mcp-server

Local Installation (For project-specific use):

npm install @muhammadmehdi/ollama-mcp-server

Note: Scoped packages install in a folder structure like node_modules/@muhammadmehdi/ollama-mcp-server/

Step 2: Find the Installation Path

After installation, you need to find where the package was installed to configure your MCP client.

For Global Installation:

Windows (PowerShell):

# Method 1: Get npm global root
npm root -g
# Output example: C:\Users\YourUsername\AppData\Roaming\npm\node_modules

# Method 2: Get exact package path
npm list -g --depth=0 @muhammadmehdi/ollama-mcp-server
# Or use:
npm root -g | ForEach-Object { Join-Path $_ "@muhammadmehdi\ollama-mcp-server\dist\index.js" }

macOS/Linux:

# Method 1: Get npm global root
npm root -g
# Output example: /usr/local/lib/node_modules

# Method 2: Get exact package path
npm list -g --depth=0 @muhammadmehdi/ollama-mcp-server
# Or construct the path:
echo "$(npm root -g)/@muhammadmehdi/ollama-mcp-server/dist/index.js"

For Local Installation:

All Platforms:

# Get the local installation path
npm list --depth=0 @muhammadmehdi/ollama-mcp-server

# Or construct the path manually:
# Windows: .\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js
# macOS/Linux: ./node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js

Quick Path Finder Script:

Windows PowerShell:

# Save this as find-ollama-mcp.ps1
$globalPath = npm root -g
$packagePath = Join-Path $globalPath "@muhammadmehdi\ollama-mcp-server\dist\index.js"
if (Test-Path $packagePath) {
    Write-Host "Global installation found at:"
    Write-Host $packagePath
} else {
    Write-Host "Package not found in global installation"
    Write-Host "Try local installation path:"
    Write-Host ".\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js"
}

macOS/Linux:

#!/bin/bash
# Save this as find-ollama-mcp.sh
GLOBAL_PATH=$(npm root -g)
PACKAGE_PATH="$GLOBAL_PATH/@muhammadmehdi/ollama-mcp-server/dist/index.js"
if [ -f "$PACKAGE_PATH" ]; then
    echo "Global installation found at:"
    echo "$PACKAGE_PATH"
else
    echo "Package not found in global installation"
    echo "Try local installation path:"
    echo "./node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"
fi

Step 3: Verify Ollama is Running

Before configuring your MCP client, ensure Ollama is running:

# Check if Ollama is accessible
curl http://localhost:11434/api/tags

# Or on Windows PowerShell:
Invoke-WebRequest -Uri http://localhost:11434/api/tags

# Check installed models
ollama list

Step 4: Configure Your MCP Client

Now configure your MCP client (Cursor IDE or Claude Desktop) to use the installed package. See the detailed configuration sections below.

Step 5: Test the Setup

After configuration:

  1. Restart your MCP client completely
  2. Open the AI chat panel
  3. Try asking: "What Ollama models do I have installed?"
  4. The AI should use the list_models tool to show your models

Common Installation Paths

Here are typical installation paths you might encounter:

Windows Global:

  • C:\Users\<YourUsername>\AppData\Roaming\npm\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js
  • C:\Program Files\nodejs\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js (if installed with admin)

macOS Global:

  • /usr/local/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js
  • /opt/homebrew/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js (Homebrew on Apple Silicon)

Linux Global:

  • /usr/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js
  • ~/.npm-global/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js (custom npm prefix)

Local Installation (All Platforms):

  • ./node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js (relative to your project)
  • Or use absolute path: C:\YourProject\node_modules\@muhammadmehdi\ollama-mcp-server\dist\index.js

MCP Client Configuration

Cursor IDE Configuration

Method 1: Through Settings UI (Recommended)

  1. Open Cursor Settings:

    • Click the gear icon (⚙️) in the top-right corner
    • Navigate to Settings → Tools & Integrations → MCP Tools
    • Click Add Custom MCP or Edit MCP Configuration
  2. Add Configuration: The configuration will be added to your MCP configuration file automatically.

Method 2: Manual Configuration

  1. Locate the MCP Configuration File:

    Windows:

    %APPDATA%\Cursor\User\globalStorage\mcp.json
    

    Or: C:\Users\<YourUsername>\AppData\Roaming\Cursor\User\globalStorage\mcp.json

    macOS:

    ~/Library/Application Support/Cursor/User/globalStorage/mcp.json
    

    Linux:

    ~/.config/Cursor/User/globalStorage/mcp.json
    
  2. Add Configuration:

    Replace the placeholder path with the actual path you found in Step 2 of the Quick Start Guide.

    For npm Global Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["<PATH_FROM_STEP_2>"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    For npm Local Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["<PROJECT_PATH>/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    
  3. Get the Absolute Path:

    If installed via npm (Global):

    Windows PowerShell:

    # Get the path automatically
    $globalPath = npm root -g
    $packagePath = Join-Path $globalPath "@muhammadmehdi\ollama-mcp-server\dist\index.js"
    Write-Host $packagePath
    

    macOS / Linux:

    # Get the path automatically
    echo "$(npm root -g)/@muhammadmehdi/ollama-mcp-server/dist/index.js"
    

    If installed from source:

    Windows PowerShell:

    Resolve-Path "C:\StartUp\ollama-mcp-server\dist\index.js"
    

    macOS / Linux:

    realpath dist/index.js
    
  4. Complete Configuration Examples:

    Windows - npm Global Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["C:/Users/YourUsername/AppData/Roaming/npm/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    macOS - npm Global Installation (Homebrew):

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["/opt/homebrew/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    Linux - npm Global Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["/usr/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    All Platforms - npm Local Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["/absolute/path/to/your/project/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    🌐 Universal Compatibility: This configuration works with any MCP-compatible IDE or application. The same setup applies whether you use Cursor IDE, Claude Desktop, Cline, or any other MCP client.

  5. Using Full Node.js Path (if Node.js is not in PATH):

    Windows:

    {
      "mcpServers": {
        "ollama": {
          "command": "C:/Program Files/nodejs/node.exe",
          "args": ["C:/StartUp/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    macOS:

    {
      "mcpServers": {
        "ollama": {
          "command": "/usr/local/bin/node",
          "args": ["/Users/username/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    
  6. Restart Cursor completely to apply changes.

Claude Desktop Configuration

  1. Locate Configuration File:

    macOS:

    ~/Library/Application Support/Claude/claude_desktop_config.json
    

    Windows:

    %APPDATA%\Claude\claude_desktop_config.json
    

    Or: C:\Users\<YourUsername>\AppData\Roaming\Claude\claude_desktop_config.json

    Linux:

    ~/.config/Claude/claude_desktop_config.json
    
  2. Add Configuration:

    Use the same configuration format as Cursor IDE, with the path from Step 2 of the Quick Start Guide:

    For npm Global Installation:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["<PATH_FROM_QUICK_START_STEP_2>"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    Example - Windows:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["C:/Users/YourUsername/AppData/Roaming/npm/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    

    Example - macOS:

    {
      "mcpServers": {
        "ollama": {
          "command": "node",
          "args": ["/opt/homebrew/lib/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
          "env": {
            "OLLAMA_BASE_URL": "http://localhost:11434"
          }
        }
      }
    }
    
  3. Restart Claude Desktop completely to apply changes.

Other MCP Clients

For other MCP-compatible clients, follow the same pattern:

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/absolute/path/to/ollama-mcp-server/dist/index.js"],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}

Replace /absolute/path/to/ollama-mcp-server/dist/index.js with your actual path from Step 2 of the Quick Start Guide.

IDE Compatibility

🌐 Universal MCP Compatibility:
This server works with ANY IDE or application that supports the Model Context Protocol (MCP).
It's not limited to specific editors - if your tool supports MCP, this server will work!

✅ Fully Compatible IDEs/Clients

Your Ollama MCP server is compatible with any IDE or application that supports the Model Context Protocol (MCP). The server uses the standard MCP stdio transport, making it universally compatible with MCP clients. This means you're not locked into a specific IDE - use it with whatever MCP-compatible tool you prefer!

Confirmed Working:

  • Cursor IDE - Full support with native MCP integration
  • Claude Desktop - Full support with MCP configuration
  • Cline - MCP-compatible AI coding assistant

Should Work With:

  • Any IDE/client that implements the MCP protocol
  • Any application that can spawn processes and communicate via stdio
  • Any MCP-compatible tool or framework

How to Check if Your IDE Supports MCP

  1. Look for MCP settings: Check if your IDE has MCP configuration options
  2. Check documentation: Look for "Model Context Protocol" or "MCP" in your IDE's docs
  3. Configuration file: MCP clients typically use a JSON configuration file
  4. Community support: Check if others have successfully integrated MCP servers

Adding Support to Your IDE

If your IDE doesn't have built-in MCP support, you can:

  1. Request the feature: Ask your IDE developers to add MCP support
  2. Use a bridge: Some tools can bridge MCP servers to other protocols
  3. Use compatible tools: Use Cursor or Claude Desktop alongside your IDE

Protocol Details

  • Transport: stdio (standard input/output)
  • Protocol: Model Context Protocol (MCP) v1.0+
  • SDK: Uses official @modelcontextprotocol/sdk
  • Compatibility: Any MCP-compatible client

The server follows the MCP specification, ensuring maximum compatibility across different platforms and IDEs.
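
As an illustration of that compatibility, any client built on the official TypeScript SDK can spawn the server over stdio and call its tools. The sketch below is not taken from this project: the file path is a placeholder for the one you located earlier, and the Client/StdioClientTransport usage follows the @modelcontextprotocol/sdk client API as I understand it.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server the same way an IDE would: node <path to dist/index.js>.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/absolute/path/to/node_modules/@muhammadmehdi/ollama-mcp-server/dist/index.js"],
    env: { OLLAMA_BASE_URL: "http://localhost:11434" },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  // Discover the available tools, then call one of them.
  console.log(await client.listTools());
  console.log(await client.callTool({ name: "list_models", arguments: {} }));

  await client.close();
}

main().catch(console.error);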

Using the Server After Installation

Once configured, you can use the Ollama MCP server through your MCP client (Cursor IDE, Claude Desktop, etc.). Here are practical examples:

Example 1: List Your Models

Simply ask your AI assistant:

"What Ollama models do I have installed?"

The AI will automatically use the list_models tool to show you all available models.

Example 2: Get Model Information

Ask about a specific model:

"Show me information about the llama2 model"

The AI will use get_model_info to retrieve detailed model information.

Example 3: Chat with a Model

Have a conversation:

"Chat with llama2 and ask it to explain quantum computing in simple terms"

The AI will use the chat tool to interact with your local model.

Example 4: Generate Code

Ask for code generation:

"Use Ollama to generate a Python function that calculates fibonacci numbers using the llama2 model"

The AI will use the generate tool to create code using your local model.

Example 5: Download a New Model

Request a model download:

"Download the mistral model using Ollama"

The AI will use pull_model to download the model.

Example 6: Manage Models

Delete a model:

"Delete the old llama2 model from Ollama"

The AI will use delete_model to remove it.

Tips for Best Experience

  1. Be Specific: Mention the model name when you want to use a specific one
  2. Check Models First: Ask "What models do I have?" before trying to use one
  3. Model Names: Use the exact model name with tag (e.g., "llama2:latest" not just "llama2")
  4. Large Models: Be patient when pulling large models - it may take time
  5. Restart Client: If tools aren't working, restart your MCP client

API Documentation

Tool: list_models

List all available Ollama models on your system.

Input Parameters: None

Output: JSON object containing an array of models with the following structure:

{
  "models": [
    {
      "name": "llama2:latest",
      "size": 3825819519,
      "modified_at": "2024-01-15T10:30:00.000Z",
      "digest": "sha256:abc123..."
    }
  ]
}

Example Usage in MCP Client:

User: "List all my Ollama models"
AI: [Uses list_models tool]

Response Fields:

  • name (string): Model name and tag
  • size (number): Model size in bytes
  • modified_at (string): ISO 8601 timestamp of last modification
  • digest (string): SHA256 digest of the model
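
Under the hood, this tool is presumably a thin wrapper around the ollama client's list() call, along the lines of the sketch below (an assumption based on the ollama dependency; the handler wiring through the MCP SDK is omitted):

import { Ollama } from "ollama";

const ollama = new Ollama({ host: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434" });

// list_models: fetch the local models and return the fields documented above.
async function listModels() {
  const { models } = await ollama.list();
  return {
    models: models.map((m) => ({
      name: m.name,
      size: m.size,
      modified_at: m.modified_at,
      digest: m.digest,
    })),
  };
}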

Tool: get_model_info

Get detailed information about a specific model.

Input Parameters:

  • model (string, required): The name of the model (e.g., "llama2:latest")

Output: JSON object containing detailed model information including:

  • Model parameters
  • Template
  • System prompt
  • Modelfile
  • Model details

Example Usage in MCP Client:

User: "Show me information about llama2 model"
AI: [Uses get_model_info tool with model="llama2:latest"]

Error Handling:

  • Returns error if model name is not provided
  • Returns error if model does not exist
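
A plausible handler shape covering both documented error cases is sketched below (assumptions: the ollama client's show() method with a model request field, error wording that is purely illustrative, and the ollama client instance constructed as in the list_models sketch above):

// get_model_info: validate the argument, then ask Ollama for the model details.
async function getModelInfo(model?: string) {
  if (!model) {
    throw new Error('Parameter "model" is required, e.g. "llama2:latest"');
  }
  try {
    // show() returns the modelfile, template, parameters, and details.
    return await ollama.show({ model });
  } catch (err) {
    throw new Error(`Model "${model}" was not found: ${String(err)}`);
  }
}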

Tool: chat

Chat with an Ollama model with support for system prompts and conversation context.

Input Parameters:

  • model (string, required): The name of the model to use
  • message (string, required): The message to send to the model
  • system (string, optional): System prompt to set model behavior
  • context (array, optional): Previous conversation messages in format:
    [
      {
        "role": "user",
        "content": "Hello"
      },
      {
        "role": "assistant",
        "content": "Hi there!"
      }
    ]
    
  • stream (boolean, optional): Whether to stream the response (default: false)

Output: Text response from the model

Example Usage in MCP Client:

User: "Chat with llama2 and ask it to explain quantum computing"
AI: [Uses chat tool with model="llama2:latest", message="Explain quantum computing"]

Advanced Example with Context:

{
  "model": "llama2:latest",
  "message": "What was my previous question?",
  "system": "You are a helpful assistant.",
  "context": [
    {
      "role": "user",
      "content": "What is AI?"
    },
    {
      "role": "assistant",
      "content": "AI stands for Artificial Intelligence..."
    }
  ]
}
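
Conceptually, the tool flattens system, context, and message into a single Ollama messages array, roughly as sketched below (streaming omitted for brevity; the ollama client instance is the one from the list_models sketch, and the argument shape mirrors the parameters listed above):

interface ChatArgs {
  model: string;
  message: string;
  system?: string;
  context?: { role: "user" | "assistant"; content: string }[];
}

// chat: prepend the system prompt and prior context, then append the new user message.
async function chat(args: ChatArgs) {
  const messages = [
    ...(args.system ? [{ role: "system", content: args.system }] : []),
    ...(args.context ?? []),
    { role: "user", content: args.message },
  ];
  const response = await ollama.chat({ model: args.model, messages });
  return response.message.content;
}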

Tool: generate

Generate text from a prompt using an Ollama model.

Input Parameters:

  • model (string, required): The name of the model to use
  • prompt (string, required): The prompt to generate from
  • system (string, optional): System prompt
  • stream (boolean, optional): Whether to stream the response (default: false)

Output: Generated text

Example Usage in MCP Client:

User: "Generate a Python function to calculate fibonacci numbers using llama2"
AI: [Uses generate tool with model="llama2:latest", prompt="Write a Python function to calculate fibonacci numbers"]

Example with System Prompt:

{
  "model": "llama2:latest",
  "prompt": "Write a hello world program",
  "system": "You are a programming expert. Always provide clean, well-commented code."
}
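
The generate tool presumably forwards these fields to the client's generate() call; when streaming is requested, the partial chunks can simply be concatenated. A sketch under the same assumptions as the previous examples:

// generate: single response by default, with optional streaming accumulation.
async function generate(model: string, prompt: string, system?: string, stream = false) {
  if (!stream) {
    const result = await ollama.generate({ model, prompt, system });
    return result.response;
  }
  let text = "";
  for await (const chunk of await ollama.generate({ model, prompt, system, stream: true })) {
    text += chunk.response; // each streamed chunk carries a partial `response`
  }
  return text;
}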

Tool: pull_model

Download a model from Ollama's model registry.

Input Parameters:

  • model (string, required): The name of the model to pull (e.g., "llama2", "mistral:7b")

Output: Status messages from the pull operation, including progress updates

Example Usage in MCP Client:

User: "Download the llama2 model"
AI: [Uses pull_model tool with model="llama2"]

Note: Large models may take significant time to download. The operation streams progress updates.
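
Those progress updates map naturally onto the ollama client's streaming pull API; a handler might collect them as shown below (a sketch; the status/completed/total field names are the ones the ollama client exposes, and the client instance is the one from the list_models sketch):

// pull_model: stream download progress and surface it as human-readable status lines.
async function pullModel(model: string) {
  const progress: string[] = [];
  for await (const update of await ollama.pull({ model, stream: true })) {
    if (update.total) {
      const percent = Math.round((100 * (update.completed ?? 0)) / update.total);
      progress.push(`${update.status}: ${percent}%`);
    } else {
      progress.push(update.status);
    }
  }
  return progress;
}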


Tool: delete_model

Delete a model from your local Ollama installation.

Input Parameters:

  • model (string, required): The name of the model to delete

Output: Confirmation message

Example Usage in MCP Client:

User: "Delete the llama2 model"
AI: [Uses delete_model tool with model="llama2:latest"]

Warning: This action is irreversible. The model will need to be pulled again if you want to use it later.
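
The corresponding handler is presumably a one-liner over the client's delete() call (same assumptions as the earlier sketches; the request field name follows the current ollama client):

// delete_model: irreversible removal of a local model.
async function deleteModel(model: string) {
  await ollama.delete({ model });
  return `Model "${model}" deleted.`;
}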


Development

Project Structure

ollama-mcp-server/
├── src/
│   └── index.ts          # Main server implementation
├── dist/                 # Compiled JavaScript (generated)
│   ├── index.js
│   ├── index.js.map
│   ├── index.d.ts
│   └── index.d.ts.map
├── node_modules/         # Dependencies (generated)
├── package.json          # Project configuration and dependencies
├── package-lock.json     # Locked dependency versions
├── tsconfig.json         # TypeScript configuration
├── README.md             # This file
├── CURSOR_SETUP.md       # Detailed Cursor setup guide
├── cursor-mcp-config.json # Example Cursor configuration
├── example-config.json    # Example MCP configuration
├── get-path.ps1          # Windows PowerShell helper script
└── .gitignore            # Git ignore rules

TypeScript Configuration

The project uses TypeScript with the following key settings:

  • Target: ES2022
  • Module: ES2022
  • Strict Mode: Enabled
  • Source Maps: Enabled for debugging
  • Declaration Files: Generated for type definitions

See tsconfig.json for full configuration.
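
For reference, a tsconfig.json matching those settings would look roughly like this (illustrative only; the project's actual file may differ in outDir, module resolution, and other options):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "strict": true,
    "sourceMap": true,
    "declaration": true,
    "declarationMap": true,
    "outDir": "dist"
  },
  "include": ["src"]
}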

Available Scripts

Script             Description
npm run build      Compile TypeScript to JavaScript
npm start          Run the compiled server
npm run dev        Run the server in development mode with tsx (no build required)
npm run watch      Watch for changes and automatically rebuild

Development Workflow

  1. Make Changes: Edit files in src/
  2. Build: Run npm run build or use npm run watch for automatic rebuilding
  3. Test: Test with your MCP client
  4. Debug: Use source maps in dist/ for debugging

Code Style

  • Follow TypeScript best practices
  • Use strict typing
  • Maintain consistent formatting
  • Add comments for complex logic

Building for Production

# Clean previous build
rm -rf dist/

# Build
npm run build

# Verify build
ls dist/

Troubleshooting

Server Won't Start

Symptoms: Server fails to start or exits immediately

Solutions:

  1. Check Node.js Version:

    node --version  # Should be 18.0.0 or higher
    
  2. Verify Dependencies:

    npm install
    
  3. Check Build:

    npm run build
    ls dist/index.js  # Should exist
    
  4. Check Ollama Connection:

    curl http://localhost:11434/api/tags
    # Windows PowerShell:
    # Invoke-WebRequest -Uri http://localhost:11434/api/tags
    
  5. Verify Environment Variables:

    # Windows PowerShell:
    echo $env:OLLAMA_BASE_URL
    
    # macOS/Linux:
    echo $OLLAMA_BASE_URL
    

No Models Available

Symptoms: list_models returns empty array or error

Solutions:

  1. Pull a Model:

    ollama pull llama2
    
  2. Verify Models:

    ollama list
    
  3. Check Ollama Service:

    # Start Ollama if not running
    ollama serve
    

Connection Errors

Symptoms: "Connection refused", "ECONNREFUSED", or timeout errors

Solutions:

  1. Verify Ollama is Running:

    # Check if Ollama is accessible
    curl http://localhost:11434/api/tags
    
  2. Check Firewall Settings:

    • Ensure port 11434 is not blocked
    • For remote Ollama, check firewall rules
  3. Verify OLLAMA_BASE_URL:

    # Check current value
    echo $OLLAMA_BASE_URL  # or $env:OLLAMA_BASE_URL on Windows
    
    # Update if needed
    export OLLAMA_BASE_URL=http://localhost:11434
    
  4. Test Remote Connection:

    curl http://your-ollama-host:11434/api/tags
    

MCP Client Not Connecting

Symptoms: MCP client shows server as disconnected or tools unavailable

Solutions:

  1. Verify Configuration Path:

    • Use absolute paths (not relative)
    • Ensure path uses correct slashes for your OS
    • Verify file exists at specified path
  2. Check Node.js in PATH:

    which node  # macOS/Linux
    where.exe node  # Windows
    
  3. Use Full Node.js Path: Update MCP config to use full path to Node.js executable

  4. Check Cursor/Claude Logs:

    • Cursor: Help → Toggle Developer Tools → Console
    • Look for MCP-related errors
  5. Restart Client:

    • Completely close and restart Cursor/Claude Desktop
    • MCP servers are loaded on startup

Path Issues (Windows)

Symptoms: "Cannot find module" or path-related errors

Solutions:

  1. Use Forward Slashes or Escaped Backslashes:

    "args": ["C:/StartUp/ollama-mcp-server/dist/index.js"]
    // OR
    "args": ["C:\\StartUp\\ollama-mcp-server\\dist\\index.js"]
    
  2. Use Absolute Paths:

    • Never use relative paths like ./dist/index.js
    • Always use full path from drive letter
  3. Verify File Exists:

    Test-Path "C:\StartUp\ollama-mcp-server\dist\index.js"
    

Model Not Found Errors

Symptoms: "model not found" when using chat/generate tools

Solutions:

  1. List Available Models: Use list_models tool to see installed models

  2. Use Correct Model Name:

    • Include tag: llama2:latest not just llama2
    • Check exact name from ollama list
  3. Pull Missing Model:

    ollama pull model-name
    

Performance Issues

Symptoms: Slow responses or timeouts

Solutions:

  1. Use Smaller Models: Larger models require more resources
  2. Check System Resources: Ensure adequate RAM and CPU
  3. Disable Streaming: Set stream: false for faster responses
  4. Check Ollama Performance: Test Ollama directly with ollama run

Security Considerations

Environment Variables

  • Never commit sensitive data: Don't include API keys or passwords in configuration files
  • Use secure defaults: Default to localhost to prevent accidental exposure
  • Validate URLs: Ensure OLLAMA_BASE_URL points to trusted instances

Network Security

  • Localhost by Default: Server defaults to http://localhost:11434 for security
  • Remote Instances: Only connect to trusted Ollama instances
  • HTTPS: Consider using HTTPS for remote Ollama instances (if supported)

Model Security

  • Verify Models: Only pull models from trusted sources
  • Model Isolation: Be aware that models can execute code or access system resources
  • Input Validation: The server validates all inputs, but be cautious with user-provided prompts

Best Practices

  1. Keep dependencies updated: npm audit and npm update
  2. Use environment variables for configuration
  3. Don't expose the server directly to the internet
  4. Review model outputs before using in production

Performance Tips

Model Selection

  • Use Appropriate Models: Smaller models (3B-7B) are faster for simple tasks
  • Model Caching: Ollama caches models in memory after first use
  • Quantized Models: Use quantized models (Q4, Q5) for better performance

Optimization Strategies

  1. Disable Streaming: Set stream: false for faster single responses
  2. Batch Operations: Group multiple operations when possible
  3. Connection Pooling: The Ollama client handles connections efficiently
  4. Resource Monitoring: Monitor CPU and RAM usage during operations

System Resources

  • RAM: Ensure adequate RAM for model size (7B models need ~8GB+)
  • CPU: Multi-core CPUs improve performance
  • GPU: Ollama can use GPU acceleration if available

Contributing

Contributions are welcome! Please follow these guidelines:

Getting Started

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes
  4. Test thoroughly
  5. Commit with clear messages: git commit -m "Add amazing feature"
  6. Push to your branch: git push origin feature/amazing-feature
  7. Open a Pull Request

Development Guidelines

  • Follow existing code style and patterns
  • Add TypeScript types for all new code
  • Update documentation for new features
  • Add error handling for edge cases
  • Test with multiple MCP clients if possible

Pull Request Checklist

  • Code follows project style guidelines
  • All tests pass (if applicable)
  • Documentation updated
  • No console errors or warnings
  • Tested with at least one MCP client
  • Commit messages are clear and descriptive

Reporting Issues

When reporting issues, please include:

  • Operating system and version
  • Node.js version (node --version)
  • Ollama version (ollama --version)
  • Steps to reproduce
  • Expected vs actual behavior
  • Error messages (if any)
  • MCP client being used

License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ for the MCP community

For detailed Cursor IDE setup instructions, see CURSOR_SETUP.md.
