
Snip

Screenshot and diagram tool for AI agents. Capture and annotate screenshots to show Claude what you mean — or let the agent render Mermaid diagrams and open them for visual review. Approve, annotate, or request changes with text feedback. Built-in review mode with structured responses. CLI and MCP server for Claude Code, Cursor, Windsurf, Cline. macOS, open source, free.

Stars: 33 · Forks: 2 · Updated: Mar 21, 2026 · Validated: Mar 22, 2026


snipit.dev

Visual communication layer between humans and AI agents for macOS.

Capture and annotate screenshots, render diagrams from code, review agent-generated visuals with approve/request-changes flow — all from the menu bar. AI organizes and indexes everything for semantic search. CLI and MCP integration let any AI agent use Snip as their visual I/O.

Install

brew install --cask rixinhahaha/snip/snip

Or download the DMG directly from Releases (Apple Silicon only).

Quick Start (Development)

npm install
npm run rebuild   # compile native modules
npm start         # launch (tray icon appears in menu bar)

Requires macOS 14+, Node.js 18+, and Xcode CLT (xcode-select --install). macOS 26+ recommended for native Liquid Glass effects.

For AI-powered organization, install Ollama separately. Snip detects your system Ollama and guides you through setup in Settings.

How It Works

  1. Cmd+Shift+2 — Fullscreen overlay appears on whichever display the cursor is on, drag to select a region
  2. Annotate — Rectangle, arrow, text, tag, blur brush, or AI segment tools
  3. Esc — Copies annotated screenshot to clipboard
  4. Cmd+S — Saves to disk + AI organizes in background

Screenshots are saved to ~/Documents/snip/screenshots/, where the AI renames, categorizes, and indexes them for search.

Agent Integration (CLI & MCP)

Snip exposes a CLI and MCP server so AI agents can use it as their visual I/O:

# Render a Mermaid diagram and open for review
echo 'graph LR; A-->B-->C' | snip render --format mermaid --message "Does this flow look right?"

# Open an image for agent review
snip open screenshot.png --message "Is the layout correct?"

The agent gets structured feedback: { status: "approved" | "changes_requested", edited, path, text? }. The user can annotate spatially, type text feedback, or just approve.
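That feedback shape lends itself to a thin agent-side wrapper. Below is a minimal TypeScript sketch, assuming the CLI prints the feedback object as JSON to stdout; the field types and the `parseFeedback` helper are assumptions for illustration, not part of Snip's documented API:

```typescript
// Shape of the structured review feedback described above.
// The field types are assumptions: the README lists only the field names.
export interface ReviewFeedback {
  status: "approved" | "changes_requested";
  edited: boolean;   // presumably: whether the user annotated the image
  path: string;      // path to the (possibly edited) image
  text?: string;     // present only when the user typed written feedback
}

// Hypothetical helper: parse and validate the JSON a review emits.
export function parseFeedback(raw: string): ReviewFeedback {
  const fb = JSON.parse(raw) as ReviewFeedback;
  if (fb.status !== "approved" && fb.status !== "changes_requested") {
    throw new Error(`unexpected status: ${String(fb.status)}`);
  }
  return fb;
}

// Usage, assuming the snip CLI is installed and prints the JSON to stdout:
// import { execFileSync } from "node:child_process";
// const raw = execFileSync("snip",
//   ["open", "screenshot.png", "--message", "Is the layout correct?"],
//   { encoding: "utf8" });
// const fb = parseFeedback(raw);
// if (fb.status === "changes_requested") { /* revise and re-open for review */ }
```

An agent loop can branch on `status`: re-render and re-open on "changes_requested" (using `text` and the edited image at `path` as the change request), and proceed on "approved".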

MCP tools: render_diagram, open_in_snip, search_screenshots, list_screenshots, get_screenshot, transcribe_screenshot, organize_screenshot, get_categories, install_extension.
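Clients like Claude Code register MCP servers in a JSON config. A hypothetical fragment is shown below; the exact command and arguments that start Snip's MCP server are not documented here, so `"snip"` with a `"mcp"` argument is an assumption (consult the project's setup instructions for the real invocation):

```json
{
  "mcpServers": {
    "snip": {
      "command": "snip",
      "args": ["mcp"]
    }
  }
}
```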

Key Shortcuts

| Shortcut | Action |
| --- | --- |
| Cmd+Shift+2 | Capture screenshot |
| Cmd+Shift+1 | Quick Snip (select & copy to clipboard) |
| Cmd+Shift+S | Open semantic search |
| Cmd+S | Save to disk (in editor) |
| Esc / Enter | Copy to clipboard & close (in editor) |
| V / R / T / A / G / B / S | Select / Rectangle / Text / Arrow / Tag / Blur / Segment tools |
| U | Upscale image |
| W | Transcribe text |

Documentation

| Doc | Role | Contents |
| --- | --- | --- |
| docs/PRODUCT.md | Product Manager | Vision, feature specs, terminology, product principles |
| docs/DESIGN.md | Designer | Color palettes (Dark/Light/Glass), component patterns, glass effects, icon specs |
| docs/ARCHITECTURE.md | Developer | Code structure, conventions, IPC channels, data flow, key decisions |
| docs/DEVOPS.md | DevOps | Build pipeline, signing, native modules, environment setup |
| docs/USER_FLOWS.md | QA / PM | Detailed user flows for every feature, edge cases, test cases |
| CLAUDE.md | Claude Code | Autonomous agent instructions, role references, documentation rules |

Tech Stack

Electron 33 / Fabric.js 7 / Mermaid.js 11 / Ollama (local LLM) / HuggingFace Transformers.js / SlimSAM (ONNX) / Chokidar 4 / electron-liquid-glass

On-Device Models

All AI runs locally — no cloud APIs needed for core features.

| Model | Purpose | By | Link |
| --- | --- | --- | --- |
| MiniCPM-V | Vision LLM (naming, tagging, categorizing) | OpenBMB | HF |
| SlimSAM-77-uniform | Object segmentation | Meta AI / Xenova | HF |
| Swin2SR-lightweight-x2-64 | Image upscaling (2x) | Conde et al. / Xenova | HF |
| all-MiniLM-L6-v2 | Semantic search embeddings | Microsoft / Xenova | HF |
| Vision OCR | Text transcription | Apple | Built into macOS |

License

MIT
