
superlocalmemory


Your AI Finally Remembers You - Local-first intelligent memory system for AI assistants. Works with Claude, Cursor, Windsurf, VS Code/Copilot, Codex, and 16+ AI tools. 100% local, zero cloud dependencies.

Stars: 8 · Updated: Feb 9, 2026 · Validated: Feb 11, 2026


Quick Install

npx -y superlocalmemory

SuperLocalMemory V2

Your AI Finally Remembers You

⚡ Created & Architected by Varun Pratap Bhardwaj
Solution Architect • Original Creator • 2026

Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.

Python 3.8+ · MIT License · 100% Local · 5 Min Setup · Cross Platform · Wiki

Quick Start · Why This? · Features · vs Alternatives · Docs · Issues

Created by Varun Pratap Bhardwaj · 💖 Sponsor · 📜 Attribution Required


Install in One Command

npm install -g superlocalmemory

Or clone manually:

git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh

Both methods auto-detect and configure 16+ IDEs and AI tools — Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.


The Problem

Every time you start a new Claude session:

You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*

AI assistants forget everything between sessions. You waste time re-explaining your:

  • Project architecture
  • Coding preferences
  • Previous decisions
  • Debugging history

The Solution

# Install in one command
npm install -g superlocalmemory

# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

Your AI now remembers everything. Forever. Locally. For free.


🚀 Quick Start

npm (Recommended — All Platforms)

npm install -g superlocalmemory

Mac/Linux (Manual)

git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
./install.sh

Windows (PowerShell)

git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
.\install.ps1

Verify Installation

superlocalmemoryv2:status
# ✓ Database: OK (0 memories)
# ✓ Graph: Ready
# ✓ Patterns: Ready

That's it. No Docker. No API keys. No cloud accounts. No configuration.

Updating to Latest Version

npm users:

# Update to latest version
npm update -g superlocalmemory

# Or force latest
npm install -g superlocalmemory@latest

# Install specific version
npm install -g superlocalmemory@2.3.7

Manual install users:

cd SuperLocalMemoryV2
git pull origin main
./install.sh  # Mac/Linux
# or
.\install.ps1  # Windows

Your data is safe: Updates preserve your database and all memories.

Start the Visualization Dashboard

# Launch the interactive web UI
python3 ~/.claude-memory/ui_server.py

# Opens at http://localhost:8765
# Features: Timeline view, search explorer, graph visualization

🎨 Visualization Dashboard

NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.

Features

| Feature | Description |
|---|---|
| 📈 Timeline View | See your memories chronologically with importance indicators |
| 🔍 Search Explorer | Real-time semantic search with score visualization |
| 🕸️ Graph Visualization | Interactive knowledge graph with clusters and relationships |
| 📊 Statistics Dashboard | Memory trends, tag clouds, pattern insights |
| 🎯 Advanced Filters | Filter by tags, importance, date range, clusters |

Quick Tour

# 1. Start dashboard
python ~/.claude-memory/ui_server.py

# 2. Navigate to http://localhost:8765

# 3. Explore your memories:
#    - Timeline: See memories over time
#    - Search: Find with semantic scoring
#    - Graph: Visualize relationships
#    - Stats: Analyze patterns

[[Complete Dashboard Guide →|Visualization-Dashboard]]


🔍 Advanced Search

SuperLocalMemory V2.2.0 implements hybrid search combining multiple strategies for maximum accuracy.

Search Strategies

| Strategy | Method | Best For | Speed |
|---|---|---|---|
| Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries ("authentication patterns") | 45ms |
| Full-Text Search | SQLite FTS5 with ranking | Exact phrases ("JWT tokens expire") | 30ms |
| Graph-Enhanced | Knowledge graph traversal | Related concepts ("show auth-related") | 60ms |
| Hybrid Mode | All three combined | General queries | 80ms |
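The semantic strategy can be sketched in plain Python. This is an illustrative TF-IDF + cosine-similarity implementation, not the project's actual code; the tokenization and weighting choices are assumptions:

```python
import math
from collections import Counter

memories = [
    "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h",
    "Optimized slow PostgreSQL query with a covering index",
    "Set up OAuth login flow for the admin dashboard",
]

def tokenize(text):
    return text.lower().split()

# Document frequencies over the memory corpus
docs = [tokenize(m) for m in memories]
n_docs = len(docs)
df = Counter(term for doc in docs for term in set(doc))

def tfidf(tokens):
    """Smoothed TF-IDF weights: tf * (log((1+N)/(1+df)) + 1)."""
    tf = Counter(tokens)
    return {t: (c / len(tokens)) * (math.log((1 + n_docs) / (1 + df[t])) + 1)
            for t, c in tf.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = lambda v: math.sqrt(sum(w * w for w in v.values()))
    na, nb = norm(a), norm(b)
    return dot / (na * nb) if na and nb else 0.0

doc_vecs = [tfidf(d) for d in docs]

def semantic_search(query):
    """Return memories ranked by cosine similarity to the query."""
    q = tfidf(tokenize(query))
    scored = sorted(zip((cosine(q, v) for v in doc_vecs), memories), reverse=True)
    return [m for score, m in scored if score > 0]
```

A real implementation would add stemming and stop-word handling so "expiring" matches "expiry"; this sketch only matches shared surface tokens.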

Search Examples

# Semantic: finds conceptually similar
slm recall "security best practices"
# Matches: "JWT implementation", "OAuth flow", "CSRF protection"

# Exact: finds literal text
slm recall "PostgreSQL 15"
# Matches: exactly "PostgreSQL 15"

# Graph: finds related via clusters
slm recall "authentication" --use-graph
# Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)

# Hybrid: best of all worlds (default)
slm recall "API design patterns"
# Combines semantic + exact + graph for optimal results
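The exact-phrase strategy maps directly onto SQLite's FTS5, which Python's stdlib `sqlite3` module can use when the bundled SQLite includes FTS5 (true of most modern builds). The table and column names here are illustrative, not the project's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories(content) VALUES (?)",
    [
        ("Fixed auth bug - JWT tokens were expiring too fast",),
        ("Migrated the analytics store to PostgreSQL 15",),
        ("JWT signing keys rotated quarterly",),
    ],
)

# Double quotes inside the MATCH expression request an exact phrase;
# ORDER BY rank sorts by FTS5's built-in BM25 relevance score.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ('"JWT tokens"',),
).fetchall()
```

Only the first row matches: the third memory contains "JWT" but not the adjacent phrase "JWT tokens".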

Search Performance by Dataset Size

| Memories | Semantic | FTS5 | Graph | Hybrid |
|---|---|---|---|---|
| 100 | 35ms | 25ms | 50ms | 65ms |
| 500 | 45ms | 30ms | 60ms | 80ms |
| 1,000 | 55ms | 35ms | 70ms | 95ms |
| 5,000 | 85ms | 50ms | 110ms | 150ms |

All search strategies remain sub-second even with 5,000+ memories.


⚡ Performance

Benchmarks (v2.2.0)

| Operation | Time | Comparison | Notes |
|---|---|---|---|
| Add Memory | < 10ms | - | Instant indexing |
| Search (Hybrid) | 80ms | 3.3x faster than v1 | 500 memories |
| Graph Build | < 2s | - | 100 memories |
| Pattern Learning | < 2s | - | Incremental |
| Dashboard Load | < 500ms | - | 1,000 memories |
| Timeline Render | < 300ms | - | All memories |

Storage Efficiency

| Tier | Description | Compression | Method |
|---|---|---|---|
| Tier 1 | Active memories (0-30 days) | None | - |
| Tier 2 | Warm memories (30-90 days) | 60% | Progressive summarization |
| Tier 3 | Cold storage (90+ days) | 96% | JSON archival |

Example: 1,000 memories with mixed ages = ~15MB (vs 380MB uncompressed)
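The tier percentages above are the project's own figures. The cold-storage idea itself can be sketched with stdlib compression; zlib here is an assumption about the mechanism, and the savings depend entirely on how repetitive the archived memories are:

```python
import json
import zlib

# Synthetic "cold" memories: old entries tend to share a lot of phrasing,
# which is exactly what makes archival compression pay off.
memories = [
    {"id": i, "content": f"Standup note {i}: discussed auth refactor and test coverage"}
    for i in range(500)
]

raw = json.dumps(memories).encode("utf-8")
archived = zlib.compress(raw, level=9)

savings = 1 - len(archived) / len(raw)
print(f"{len(raw)} B -> {len(archived)} B ({savings:.0%} saved)")
```

On this highly repetitive toy corpus the ratio lands well above 50%; real memories with more varied wording will compress less.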

Scalability

| Dataset Size | Search Time | Graph Build | RAM Usage |
|---|---|---|---|
| 100 memories | 35ms | 0.5s | < 30MB |
| 500 memories | 45ms | 2s | < 50MB |
| 1,000 memories | 55ms | 5s | < 80MB |
| 5,000 memories | 85ms | 30s | < 150MB |

Tested up to 10,000 memories with linear scaling and no degradation.


🌐 Works Everywhere

SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:

Supported IDEs & Tools

| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Skills + MCP | /superlocalmemoryv2:remember |
| Cursor | ✅ MCP + Skills | AI uses memory tools natively |
| Windsurf | ✅ MCP + Skills | Native memory access |
| Claude Desktop | ✅ MCP | Built-in support |
| OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
| VS Code / Copilot | ✅ MCP + Skills | .vscode/mcp.json |
| Continue.dev | ✅ MCP + Skills | /slm-remember |
| Cody | ✅ Custom Commands | /slm-remember |
| Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
| JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
| Zed Editor | ✅ MCP | Native MCP tools |
| OpenCode | ✅ MCP | Native MCP tools |
| Perplexity | ✅ MCP | Native MCP tools |
| Antigravity | ✅ MCP + Skills | Native MCP tools |
| ChatGPT | ✅ MCP Connector | search() + fetch() via HTTP tunnel |
| Aider | ✅ Smart Wrapper | aider-smart with context |
| Any Terminal | ✅ Universal CLI | slm remember "content" |

Three Ways to Access

  1. MCP (Model Context Protocol) - Auto-configured for Cursor, Windsurf, Claude Desktop

    • AI assistants get natural access to your memory
    • No manual commands needed
    • "Remember that we use FastAPI" just works
  2. Skills & Commands - For Claude Code, Continue.dev, Cody

    • /superlocalmemoryv2:remember in Claude Code
    • /slm-remember in Continue.dev and Cody
    • Familiar slash command interface
  3. Universal CLI - Works in any terminal or script

    • slm remember "content" - Simple, clean syntax
    • slm recall "query" - Search from anywhere
    • aider-smart - Aider with auto-context injection

All three methods use the SAME local database. No data duplication, no conflicts.
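For MCP clients that read a JSON configuration (Claude Desktop and Cursor both use this general shape), a server entry would look roughly like the following. The `command` and `args` mirror the package's npx invocation, but the exact entry the installer generates may differ, so verify against the written config:

```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "npx",
      "args": ["-y", "superlocalmemory"]
    }
  }
}
```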

Auto-Detection

Installation automatically detects and configures:

  • Existing IDEs (Cursor, Windsurf, VS Code)
  • Installed tools (Aider, Continue, Cody)
  • Shell environment (bash, zsh)

Zero manual configuration required. It just works.

Manual Setup for Other Apps

Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?

📘 Complete setup guide: docs/MCP-MANUAL-SETUP.md

Covers:

  • ChatGPT Desktop - Add via Settings → MCP
  • Perplexity - Configure via app settings
  • Zed Editor - JSON configuration
  • Cody - VS Code/JetBrains setup
  • Custom MCP clients - Python/HTTP integration

All tools connect to the same local database - no data duplication.


💡 Why SuperLocalMemory?

For Developers Who Use AI Daily

| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | `recall "project context"` → instant context |
| Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React..." every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |

Built on 2026 Research

Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:

  • PageIndex (Meta AI) → Hierarchical memory organization
  • GraphRAG (Microsoft) → Knowledge graph with auto-clustering
  • xMemory (Stanford) → Identity pattern learning
  • A-RAG → Multi-level retrieval with context awareness

The only open-source implementation combining all four approaches.


🆚 vs Alternatives

The Hard Truth About "Free" Tiers

| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |

Feature Comparison (What Actually Matters)

Compared against Mem0, Zep, Khoj, and Letta, SuperLocalMemory V2 offers:

  • Works in Cursor and Windsurf — fully local (alternatives are cloud-only at best)
  • Works in VS Code — native integration (alternatives rely on third-party or partial support)
  • Works in Claude
  • Works with Aider
  • Universal CLI
  • 7-Layer Universal Architecture
  • Pattern Learning
  • Multi-Profile Support — full isolation (partial at best elsewhere)
  • Knowledge Graphs
  • 100% Local — no partial cloud dependence
  • Zero Setup
  • Progressive Compression
  • Completely Free — no limited tiers

SuperLocalMemory V2 is the ONLY solution that:

  • ✅ Works across 16+ IDEs and CLI tools
  • ✅ Remains 100% local (no cloud dependencies)
  • ✅ Completely free with unlimited memories

See full competitive analysis →


✨ Features

Multi-Layer Memory Architecture

┌─────────────────────────────────────────────────────────────┐
│  Layer 9: VISUALIZATION (NEW v2.2.0)                        │
│  Interactive dashboard: timeline, search, graph explorer    │
│  Real-time analytics and visual insights                    │
├─────────────────────────────────────────────────────────────┤
│  Layer 8: HYBRID SEARCH (NEW v2.2.0)                        │
│  Combines: Semantic + FTS5 + Graph traversal                │
│  80ms response time with maximum accuracy                   │
├─────────────────────────────────────────────────────────────┤
│  Layer 7: UNIVERSAL ACCESS                                  │
│  MCP + Skills + CLI (works everywhere)                      │
│  16+ IDEs with single database                              │
├─────────────────────────────────────────────────────────────┤
│  Layer 6: MCP INTEGRATION                                   │
│  Model Context Protocol: 6 tools, 4 resources, 2 prompts    │
│  Auto-configured for Cursor, Windsurf, Claude               │
├─────────────────────────────────────────────────────────────┤
│  Layer 5: SKILLS LAYER                                      │
│  6 universal slash-commands for AI assistants               │
│  Compatible with Claude Code, Continue, Cody                │
├─────────────────────────────────────────────────────────────┤
│  Layer 4: PATTERN LEARNING                                  │
│  Learns: coding style, preferences, terminology             │
│  "You prefer React over Vue" (73% confidence)               │
├─────────────────────────────────────────────────────────────┤
│  Layer 3: KNOWLEDGE GRAPH                                   │
│  Auto-clusters: "Auth & Tokens", "Performance", "Testing"   │
│  Discovers relationships you didn't know existed            │
├─────────────────────────────────────────────────────────────┤
│  Layer 2: HIERARCHICAL INDEX                                │
│  Tree structure for fast navigation                         │
│  O(log n) lookups instead of O(n) scans                     │
├─────────────────────────────────────────────────────────────┤
│  Layer 1: RAW STORAGE                                       │
│  SQLite + Full-text search + TF-IDF vectors                 │
│  Compression: 60-96% space savings                          │
└─────────────────────────────────────────────────────────────┘
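Layer 2's O(log n) lookup claim is ordinary binary search over a sorted index. A minimal stdlib sketch of the idea; the index layout and key names here are illustrative, not the project's actual structure:

```python
import bisect

# Sorted (key, memory_id) pairs, as a hierarchical index layer might keep them
index = sorted([("auth", 1), ("jwt", 2), ("postgres", 3), ("react", 4)])
keys = [k for k, _ in index]

def lookup(key):
    """O(log n) point lookup instead of an O(n) scan over all memories."""
    pos = bisect.bisect_left(keys, key)
    if pos < len(keys) and keys[pos] == key:
        return index[pos][1]
    return None
```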

Knowledge Graph (It's Magic)

# Build the graph from your memories
python ~/.claude-memory/graph_engine.py build

# Output:
# ✓ Processed 47 memories
# ✓ Created 12 clusters:
#   - "Authentication & Tokens" (8 memories)
#   - "Performance Optimization" (6 memories)
#   - "React Components" (11 memories)
#   - "Database Queries" (5 memories)
#   ...

The graph automatically discovers relationships. Ask "what relates to auth?" and get JWT, session management, token refresh—even if you never tagged them together.
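The performance notes credit Leiden clustering for the real engine; as a much simpler intuition for what clustering over memories does, this toy links memories that share a tag and treats connected components as clusters. The tags, IDs, and algorithm are all illustrative assumptions:

```python
from collections import defaultdict

# memory id -> tags (toy data)
memories = {
    1: {"jwt", "auth"},
    2: {"auth", "session"},
    3: {"react", "components"},
    4: {"react", "hooks"},
}

# Link any two memories that share at least one tag
graph = defaultdict(set)
ids = list(memories)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        if memories[a] & memories[b]:
            graph[a].add(b)
            graph[b].add(a)

def clusters():
    """Connected components of the co-occurrence graph = naive clusters."""
    seen, out = set(), []
    for node in ids:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(graph[cur] - comp)
        seen |= comp
        out.append(comp)
    return out
```

Here memories 1 and 2 cluster via "auth", and 3 and 4 via "react", even though no single tag spans a whole cluster.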

Pattern Learning (It Knows You)

# Learn patterns from your memories
python ~/.claude-memory/pattern_learner.py update

# Get your coding identity
python ~/.claude-memory/pattern_learner.py context 0.5

# Output:
# Your Coding Identity:
# - Framework preference: React (73% confidence)
# - Style: Performance over readability (58% confidence)
# - Testing: Jest + React Testing Library (65% confidence)
# - API style: REST over GraphQL (81% confidence)

Your AI assistant can now match your preferences automatically.
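The confidence percentages read like frequency estimates over extracted mentions. A hedged sketch of that interpretation; the learner's real scoring is not documented here:

```python
from collections import Counter

# Framework mentions extracted from saved memories (toy data)
mentions = ["react", "react", "vue", "react", "react"]

counts = Counter(mentions)
framework, n = counts.most_common(1)[0]
confidence = n / len(mentions)  # share of mentions naming the top framework

print(f"Framework preference: {framework} ({confidence:.0%} confidence)")
```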

Multi-Profile Support

# Work profile
superlocalmemoryv2:profile create work --description "Day job"
superlocalmemoryv2:profile switch work

# Personal projects
superlocalmemoryv2:profile create personal
superlocalmemoryv2:profile switch personal

# Client projects (completely isolated)
superlocalmemoryv2:profile create client-acme

Each profile has isolated memories, graphs, and patterns. No context bleeding.
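Isolation of this kind usually comes down to one database per profile. A minimal sketch with a separate SQLite file per profile; the real on-disk layout under `~/.claude-memory` is an assumption:

```python
import sqlite3
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())  # stands in for a profiles directory

def profile_db(name):
    """Each profile gets its own database file: no shared state, no bleed."""
    path = base / name / "memory.db"
    path.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS memories (content TEXT)")
    return conn

work = profile_db("work")
work.execute("INSERT INTO memories VALUES ('Day-job API uses FastAPI')")
work.commit()

personal = profile_db("personal")
rows = personal.execute("SELECT content FROM memories").fetchall()
# the 'personal' profile sees none of 'work's memories
```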


📖 Documentation

| Guide | Description |
|---|---|
| Quick Start | Get running in 5 minutes |
| Installation | Detailed setup instructions |
| Visualization Dashboard | Interactive web UI guide (NEW v2.2.0) |
| CLI Reference | All commands explained |
| Knowledge Graph | How clustering works |
| Pattern Learning | Identity extraction |
| Profiles Guide | Multi-context management |
| API Reference | Python API documentation |

🔧 CLI Commands

# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2  # Save memory
superlocalmemoryv2:recall "search query"                 # Search
superlocalmemoryv2:list                                  # Recent memories
superlocalmemoryv2:status                                # System health

# Profile Management
superlocalmemoryv2:profile list                          # Show all profiles
superlocalmemoryv2:profile create <name>                 # New profile
superlocalmemoryv2:profile switch <name>                 # Switch context

# Knowledge Graph
python ~/.claude-memory/graph_engine.py build            # Build graph
python ~/.claude-memory/graph_engine.py stats            # View clusters
python ~/.claude-memory/graph_engine.py related --id 5   # Find related

# Pattern Learning
python ~/.claude-memory/pattern_learner.py update        # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5   # Get identity

# Reset (Use with caution!)
superlocalmemoryv2:reset soft                            # Clear memories
superlocalmemoryv2:reset hard --confirm                  # Nuclear option

📊 Performance


| Metric | Result | Notes |
|---|---|---|
| Hybrid search | 80ms | Semantic + FTS5 + Graph combined |
| Semantic search | 45ms | 3.3x faster than v1 |
| FTS5 search | 30ms | Exact phrase matching |
| Graph build (100 memories) | < 2 seconds | Leiden clustering |
| Pattern learning | < 2 seconds | Incremental updates |
| Dashboard load | < 500ms | 1,000 memories |
| Timeline render | < 300ms | All memories visualized |
| Storage compression | 60-96% reduction | Progressive tiering |
| Memory overhead | < 50MB RAM | Lightweight |

Tested up to 10,000 memories with sub-second search times and linear scaling.


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Areas for contribution:

  • Additional pattern categories
  • Graph visualization UI
  • Integration with more AI assistants
  • Performance optimizations
  • Documentation improvements

💖 Support This Project

If SuperLocalMemory saves you time, consider supporting its development.


📜 License

MIT License — use freely, even commercially. Just include the license.


👨‍💻 Author

Varun Pratap Bhardwaj — Solution Architect

GitHub

Building tools that make AI actually useful for developers.


100% local. 100% private. 100% yours.

Star on GitHub
