
reachy-claude-mcp

An MCP server that integrates the Reachy Mini robot (physical or simulated) into Claude Code, enabling physical feedback through emotions, animations, speech, and task tracking.

Tools: 21 · Updated: Jan 22, 2026

Reachy Claude MCP

MCP server that brings Reachy Mini to life as your coding companion in Claude Code.

Reachy reacts to your coding sessions with emotions, speech, and celebratory dances - making coding more interactive and fun!

Features

Basic:
  • Robot emotions & animations
  • Text-to-speech (Piper TTS)
  • Session tracking (SQLite)

+ LLM adds:
  • Smart sentiment analysis
  • AI-generated responses

+ Memory adds:
  • Semantic problem search
  • Cross-project memory

Requirements

  • Python 3.10+
  • Reachy Mini robot or the simulation (see below)
  • Audio output (speakers/headphones)

Platform Support

  • macOS Apple Silicon: Basic, LLM (MLX), LLM (Ollama), Memory
  • macOS Intel: Basic, LLM (Ollama), Memory
  • Linux: Basic, LLM (Ollama), Memory
  • Windows: ⚠️ Experimental

Quick Start

  1. Install the package:

    pip install reachy-claude-mcp
    
  2. Start Reachy Mini simulation (if you don't have the physical robot):

    # On macOS with Apple Silicon
    mjpython -m reachy_mini.daemon.app.main --sim --scene minimal
    
    # On other platforms
    python -m reachy_mini.daemon.app.main --sim --scene minimal
    
  3. Add to Claude Code (~/.mcp.json):

    {
      "mcpServers": {
        "reachy-claude": {
          "command": "reachy-claude"
        }
      }
    }
    
  4. Start Claude Code and Reachy will react to your coding!

  5. (Optional) Add instructions for Claude - Copy examples/CLAUDE.md to your project root or ~/projects/CLAUDE.md. This teaches Claude when and how to use Reachy's tools effectively.
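The repository ships its own examples/CLAUDE.md; the fragment below is only a hypothetical illustration of the kind of guidance such a file can contain, not the actual shipped content.

```markdown
## Reachy usage

- Call robot_wake_up at the start of a session and robot_sleep at the end.
- After completing a task, call robot_respond with a 1-2 sentence summary.
- Call robot_celebrate when tests pass, and robot_oops after an error.
```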

Installation Options

Basic (robot + TTS only)

pip install reachy-claude-mcp

Without LLM features, Reachy uses keyword matching for sentiment - still works great!
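The keyword fallback can be pictured as a simple word-overlap check. A minimal, dependency-free sketch of the idea (the shipped implementation in llm_analyzer.py may use different keywords and logic):

```python
# Hypothetical sketch of keyword-based sentiment matching; keyword sets
# and tie-breaking here are illustrative, not the package's real lists.
POSITIVE = {"passed", "fixed", "success", "done", "works"}
NEGATIVE = {"error", "failed", "exception", "broken", "traceback"}

def keyword_sentiment(text: str) -> str:
    """Classify tool output as positive, negative, or neutral."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```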

With LLM (Smart Responses)

Option A: MLX (Apple Silicon only - fastest)

pip install "reachy-claude-mcp[llm]"

Option B: Ollama (cross-platform)

# Install Ollama from https://ollama.ai
ollama pull qwen2.5:1.5b

# Then just use the basic install - Ollama is auto-detected
pip install reachy-claude-mcp

The system automatically picks the best available backend: MLX → Ollama → keyword fallback.
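The fallback order reduces to a small preference chain; a sketch (function and flag names here are hypothetical, not the package's real API):

```python
# Illustrative sketch of the MLX -> Ollama -> keyword fallback described above.
def pick_backend(mlx_available: bool, ollama_available: bool) -> str:
    """Return the best available sentiment backend, in preference order."""
    if mlx_available:
        return "mlx"
    if ollama_available:
        return "ollama"
    return "keyword"
```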

Full Features (requires Qdrant)

pip install "reachy-claude-mcp[all]"

# Start Qdrant vector database
docker compose up -d

Development Install

git clone https://github.com/mchardysam/reachy-claude-mcp.git
cd reachy-claude-mcp

# Install with all features
pip install -e ".[all]"

# Or specific features
pip install -e ".[llm]"     # MLX sentiment analysis (Apple Silicon)
pip install -e ".[memory]"  # Qdrant vector store

Running Reachy Mini

No Robot? Use the Simulation!

You don't need a physical Reachy Mini to use this. The simulation works great:

# On macOS with Apple Silicon, use mjpython for the MuJoCo GUI
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# On Linux/Windows/Intel Mac
python -m reachy_mini.daemon.app.main --sim --scene minimal

The simulation dashboard will be available at http://localhost:8000.
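If you want to check programmatically that the daemon is up, you can probe the dashboard port. A standard-library sketch, assuming the default URL above:

```python
import urllib.request
import urllib.error

def daemon_running(url: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP on the daemon's dashboard URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False
```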

Physical Robot

Follow the Reachy Mini setup guide to connect to your physical robot.

Configuration

Environment Variables

General:
  • REACHY_CLAUDE_HOME (default: ~/.reachy-claude) - Data directory for database, memory, and voice models

LLM Settings:
  • REACHY_LLM_MODEL (default: mlx-community/Qwen2.5-1.5B-Instruct-4bit) - MLX model (Apple Silicon)
  • REACHY_OLLAMA_HOST (default: http://localhost:11434) - Ollama server URL
  • REACHY_OLLAMA_MODEL (default: qwen2.5:1.5b) - Ollama model name

Memory Settings:
  • REACHY_QDRANT_HOST (default: localhost) - Qdrant server host
  • REACHY_QDRANT_PORT (default: 6333) - Qdrant server port

Voice Settings:
  • REACHY_VOICE_MODEL (default: auto-download) - Path to a custom Piper voice model
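These variables are ordinary environment lookups with defaults; a minimal sketch of how they might be resolved (the actual config.py may structure this differently):

```python
import os
from pathlib import Path

# Hypothetical sketch mirroring the defaults listed above; the package's
# real config.py may resolve these differently.
def load_config(env=None):
    """Resolve Reachy settings from environment variables with defaults."""
    env = os.environ if env is None else env
    return {
        "home": Path(env.get("REACHY_CLAUDE_HOME", "~/.reachy-claude")).expanduser(),
        "ollama_host": env.get("REACHY_OLLAMA_HOST", "http://localhost:11434"),
        "ollama_model": env.get("REACHY_OLLAMA_MODEL", "qwen2.5:1.5b"),
        "qdrant_host": env.get("REACHY_QDRANT_HOST", "localhost"),
        "qdrant_port": int(env.get("REACHY_QDRANT_PORT", "6333")),
    }
```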

MCP Tools

Basic Interactions

  • robot_respond - Speak a summary (1-2 sentences) + play emotion
  • robot_emotion - Play emotion animation only
  • robot_celebrate - Success animation + excited speech
  • robot_thinking - Thinking/processing animation
  • robot_wake_up - Start-of-session greeting
  • robot_sleep - End-of-session goodbye
  • robot_oops - Error acknowledgment
  • robot_acknowledge - Quick nod without speaking

Dance Moves

  • robot_dance - Perform a dance move
  • robot_dance_respond - Dance while speaking
  • robot_big_celebration - Major milestone celebration
  • robot_recovered - After fixing a tricky bug

Smart Features

  • process_response - Auto-analyze output and react appropriately
  • get_project_greeting - Context-aware greeting based on history
  • find_similar_problem - Search past solutions across projects
  • store_solution - Save problem-solution pairs for future sessions
  • link_projects - Mark relationships between projects

Utilities

  • list_robot_emotions - List available emotions
  • list_robot_dances - List available dance moves
  • get_robot_stats - Memory statistics across sessions
  • list_projects - All projects Reachy remembers

Available Emotions

amazed, angry, anxious, attentive, bored, calm, celebrate, come, confused,
curious, default, disgusted, done, excited, exhausted, frustrated, go_away,
grateful, happy, helpful, inquiring, irritated, laugh, lonely, lost, loving,
neutral, no, oops, proud, relieved, sad, scared, serene, shy, sleep, success,
surprised, thinking, tired, uncertain, understanding, wake_up, welcoming, yes

Available Dances

Celebrations: celebrate, victory, playful, party
Acknowledgments: nod, agree, listening, acknowledge
Reactions: mind_blown, recovered, fixed_it, whoa
Subtle: idle, processing, waiting, thinking_dance
Expressive: peek, glance, sharp, funky, smooth, spiral

Usage Examples

Claude can call these tools during coding sessions:

# After completing a task
robot_respond(summary="Done! Fixed the type error.", emotion="happy")

# When celebrating a win
robot_celebrate(message="Tests are passing!")

# Big milestone
robot_big_celebration(message="All tests passing! Ship it!")

# When starting to think
robot_thinking()

# Session start
robot_wake_up(greeting="Good morning! Let's write some code!")

# Session end
robot_sleep(message="Great session! See you tomorrow.")

Architecture

src/reachy_claude_mcp/
├── server.py           # MCP server with tools
├── config.py           # Centralized configuration
├── robot_controller.py # Reachy Mini control
├── tts.py              # Piper TTS (cross-platform)
├── memory.py           # Session memory manager
├── database.py         # SQLite project tracking
├── vector_store.py     # Qdrant semantic search
├── llm_backends.py     # LLM backend abstraction (MLX, Ollama)
└── llm_analyzer.py     # Sentiment analysis and summarization
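server.py exposes the tools listed earlier over MCP. As a dependency-free illustration of the dispatch idea only (the real server registers tools through the MCP SDK, and these handler bodies are hypothetical):

```python
# Dependency-free illustration of tool dispatch; the real server wires
# handlers up via the MCP SDK rather than a plain dict.
def robot_emotion(emotion: str) -> str:
    return f"playing emotion: {emotion}"

def robot_acknowledge() -> str:
    return "nod"

TOOLS = {
    "robot_emotion": robot_emotion,
    "robot_acknowledge": robot_acknowledge,
}

def call_tool(name: str, **kwargs) -> str:
    """Look up a tool by name and invoke it with keyword arguments."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```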

Troubleshooting

Voice model not found

The voice model auto-downloads on first use. If you have issues:

# Manual download
mkdir -p ~/.reachy-claude/voices
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx.json \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json

No audio on Linux

Install PulseAudio or ALSA utilities:

# Ubuntu/Debian
sudo apt install pulseaudio-utils

# Fedora
sudo dnf install pulseaudio-utils

LLM not working

Check which backend is available:

  • MLX: Only works on Apple Silicon Macs. Install with pip install "reachy-claude-mcp[llm]"
  • Ollama: Make sure Ollama is running (ollama serve) and you've pulled a model (ollama pull qwen2.5:1.5b)

If neither is available, the system falls back to keyword-based sentiment detection (still works, just less smart).

Qdrant connection failed

Make sure Qdrant is running:

docker compose up -d

Or point to a remote Qdrant instance:

export REACHY_QDRANT_HOST=your-qdrant-server.com

Simulation won't start

If mjpython isn't found, you may need to install MuJoCo separately or use regular Python:

# Try without mjpython
python -m reachy_mini.daemon.app.main --sim --scene minimal

On Linux, you may need to set MUJOCO_GL=egl or MUJOCO_GL=osmesa for headless rendering.

License

MIT
