# MCP Memory Server
A persistent vector memory server for Windsurf, VS Code, and other MCP-compliant editors.
## 🌟 Philosophy
- Privacy-first, local-first AI memory: Your data stays on your machine.
- No vendor lock-in: Uses open standards and local files.
- Built for MCP: Designed specifically to enhance Windsurf, Cursor, and other MCP-compatible IDEs.
## ℹ️ Status (v0.1.0)
Stable:
- ✅ Local MCP memory with Windsurf/Cursor
- ✅ Multi-project isolation
- ✅ Ingestion of Markdown docs
Not stable yet:
- 🚧 Auto-ingest (file watching)
- 🚧 Memory pruning
- 🚧 Remote sync
> **Note:** This server uses the MCP stdio transport (not HTTP) to match Windsurf/Cursor's native MCP integration. Do not try to connect via `curl`.
## 🏥 Health Check

To verify the server binary runs correctly:

```bash
# From within the virtual environment
python -m mcp_memory.server --help
```
## ✅ Quickstart (5-Minute Setup)

### 1. Clone and Setup

```bash
git clone https://github.com/iamjpsharma/MCPServer.git
cd MCPServer/mcp-memory-server

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -e .
```
### 2. Configure Windsurf / VS Code

Add this to your `mcpServers` configuration (e.g., `~/.codeium/windsurf/mcp_config.json`):

> **Note:** Replace `/ABSOLUTE/PATH/TO/...` with the actual full path to this directory.

```json
{
  "mcpServers": {
    "memory": {
      "command": "/ABSOLUTE/PATH/TO/mcp-memory-server/.venv/bin/python",
      "args": ["-m", "mcp_memory.server"],
      "env": {
        "MCP_MEMORY_PATH": "/ABSOLUTE/PATH/TO/mcp-memory-server/mcp_memory_data"
      }
    }
  }
}
```
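A common failure mode is a relative or mistyped path in this file. As a quick sanity check (a minimal sketch, not part of the project; the `check_mcp_config` helper is hypothetical), you can verify that the `command` and `MCP_MEMORY_PATH` entries are absolute paths before restarting your editor:

```python
from pathlib import Path

def check_mcp_config(config: dict) -> list[str]:
    """Return a list of problems found in an mcpServers config dict."""
    problems = []
    for name, server in config.get("mcpServers", {}).items():
        cmd = server.get("command", "")
        if not Path(cmd).is_absolute():
            problems.append(f"{name}: 'command' is not an absolute path: {cmd!r}")
        elif not Path(cmd).exists():
            problems.append(f"{name}: interpreter not found: {cmd}")
        mem = server.get("env", {}).get("MCP_MEMORY_PATH", "")
        if mem and not Path(mem).is_absolute():
            problems.append(f"{name}: MCP_MEMORY_PATH is not absolute: {mem!r}")
    return problems

# Example: a config with a relative interpreter path is flagged.
bad = {"mcpServers": {"memory": {
    "command": ".venv/bin/python",
    "args": ["-m", "mcp_memory.server"],
    "env": {"MCP_MEMORY_PATH": "/tmp/mcp_memory_data"},
}}}
print(check_mcp_config(bad))  # → ["memory: 'command' is not an absolute path: '.venv/bin/python'"]
```

You can load your real `mcp_config.json` with `json.load` and pass it to the same function.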
## 🚀 Usage

### 1. Ingestion (Adding Context)

Use the included helper script `ingest.sh` to add files to a specific project.

```bash
# ingest.sh <project_name> <file1> <file2> ...

# Example: Project "Thaama"
./ingest.sh project-thaama \
  docs/architecture.md \
  src/main.py

# Example: Project "OpenClaw"
./ingest.sh project-openclaw \
  README.md \
  CONTRIBUTING.md
```
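Ingestion typically splits each document into chunks before embedding. The actual chunking lives in `src/mcp_memory/ingest.py`; as a rough illustration of the idea (a hypothetical sketch, not the project's implementation), a paragraph-based chunker with a size cap might look like:

```python
def chunk_markdown(text: str, max_chars: int = 500) -> list[str]:
    """Split a Markdown document into paragraph-based chunks of bounded size."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue  # skip blank runs between paragraphs
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)  # flush the full chunk
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "# Title\n\n" + "First paragraph. " * 10 + "\n\nSecond paragraph."
for i, chunk in enumerate(chunk_markdown(doc, max_chars=120)):
    print(i, len(chunk))
```

Smaller chunks improve retrieval precision; larger chunks preserve more context per hit.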
💡 **Project ID Naming Convention**

It is recommended to use a consistent prefix for your project IDs to avoid collisions:

- `project-thaama`
- `project-openclaw`
- `project-myapp`
### 2. Connect in Editor

Once configured, the following tools will be available to the AI assistant:

- `memory_search(project_id, q)`: Semantic search within a project (`"project-thaama"`, `"project-openclaw"`, etc.).
- `memory_add(project_id, id, text)`: Manual addition of memory fragments.

The AI will effectively have "long-term memory" of the files you ingested.
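The semantics of these two tools can be sketched with a toy in-memory store. This is an illustration only: the real server scores fragments with sentence-transformer embeddings in LanceDB, whereas this stand-in uses naive word overlap. It does show the key contract, though: fragments are isolated per `project_id`.

```python
from collections import defaultdict

class ToyMemory:
    """Toy stand-in for the server's memory tools (word overlap, not embeddings)."""

    def __init__(self):
        self._store = defaultdict(dict)  # project_id -> {fragment_id: text}

    def memory_add(self, project_id: str, id: str, text: str) -> None:
        self._store[project_id][id] = text

    def memory_search(self, project_id: str, q: str, top_k: int = 3) -> list[str]:
        query = set(q.lower().split())
        scored = [
            (len(query & set(text.lower().split())), frag_id)
            for frag_id, text in self._store[project_id].items()
        ]
        scored.sort(reverse=True)  # highest overlap first
        return [frag_id for score, frag_id in scored[:top_k] if score > 0]

mem = ToyMemory()
mem.memory_add("project-thaama", "arch-1", "The gateway talks to the vector store")
mem.memory_add("project-openclaw", "readme-1", "OpenClaw build instructions")
print(mem.memory_search("project-thaama", "vector store"))  # → ['arch-1']
```

Searching `"project-openclaw"` for the same query returns nothing: each project's memory is a separate namespace.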
## 🛠 Troubleshooting

- **"No MCP server found" or connection errors:**
  - Check the output of `pwd` to ensure the absolute paths in `mcp_config.json` are 100% correct.
  - Ensure the virtual environment (`.venv`) is created and dependencies are installed.
- **Wrong `project_id` used:**
  - The AI sometimes guesses the project ID. You can explicitly tell it: "Use project_id 'project-thaama'".
- **Embedding model downloads:**
  - On the first run, the server downloads the `all-MiniLM-L6-v2` model (approx. 100 MB). This may cause a slight delay on the first request.
## 📁 Repo Structure

```text
/
├── src/mcp_memory/
│   ├── server.py        # Main MCP server entry point
│   ├── ingest.py        # Ingestion logic
│   └── db.py            # LanceDB wrapper
├── ingest.sh            # Helper script
├── requirements.txt     # Top-level dependencies
├── pyproject.toml       # Package config
├── mcp_memory_data/     # Persistent vector storage (gitignored)
└── README.md
```
## 🗺️ Roadmap

- [x] Local vector storage (LanceDB)
- [x] Multi-project isolation
- [x] Markdown ingestion
- [ ] Improved chunking strategies (semantic chunking)
- [ ] Support for PDF ingestion
- [ ] Optional HTTP transport wrapper