
Memory OS AI

Adaptive memory for AI agents — FAISS search, chat extraction, cross-project linking

Registry · Updated Mar 3, 2026

Quick Install

uvx memory-os-ai

Memory OS AI

Adaptive memory system for AI agents — universal MCP server for Claude Code, Codex CLI, VS Code Copilot, ChatGPT, and any MCP-compatible client.


Concept

Memory OS AI transforms your local documents (PDF, DOCX, images, audio) into a semantic memory that any AI model can query through the Model Context Protocol (MCP).

┌──────────────────────────────────┐
│  AI Client (any MCP-compatible)  │
│  Claude Code / Codex / Copilot   │
│  ChatGPT / custom agents         │
├──────────────────────────────────┤
│         MCP Protocol             │
│   stdio / SSE / Streamable HTTP  │
├──────────────────────────────────┤
│      Memory OS AI Server         │
│  ┌────────┐  ┌───────────────┐   │
│  │ FAISS  │  │ Chat Extractor│   │
│  │ Index  │  │ (4 sources)   │   │
│  └────────┘  └───────────────┘   │
│  ┌────────────────────────────┐  │
│  │ Cross-Project Linking      │  │
│  └────────────────────────────┘  │
└──────────────────────────────────┘
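The index-and-search loop at the heart of this pipeline can be sketched in plain Python. This is a toy illustration only: it substitutes a bag-of-words embedding and a linear scan for the real SentenceTransformers model and FAISS index.

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Ingest": embed each document once and keep the vectors in an index.
docs = {
    "notes.txt": "faiss semantic search index",
    "todo.txt": "buy milk and bread",
}
index = {name: embed(body) for name, body in docs.items()}

def search(query):
    """Return the document whose vector is closest to the query vector."""
    q = embed(query)
    return max(index, key=lambda name: cosine(q, index[name]))

print(search("semantic search"))  # → notes.txt
```

The real engine replaces `embed` with a 384-dimensional all-MiniLM-L6-v2 encoding and the linear `max` scan with a FAISS nearest-neighbor lookup, but the shape of the loop is the same.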

Features

  • 21 MCP tools for memory management, search, chat persistence, project linking, and cloud storage
  • Semantic search with FAISS + SentenceTransformers (all-MiniLM-L6-v2)
  • Multi-format ingestion: PDF, DOCX, TXT, images (OCR), audio (Whisper), PPTX
  • Chat extraction: auto-detects Claude, ChatGPT, Copilot, and terminal history
  • Cross-project linking: share memory across multiple workspaces
  • Cloud storage overflow: auto-backup to Google Drive, iCloud, Dropbox, OneDrive, S3, Azure, Box, B2
  • 3 transports: stdio (default), SSE (--sse), Streamable HTTP (--http)
  • MCP Resources: memory://documents/*, memory://logs/conversation, memory://linked/*
  • Local-first: all data on your machine by default, cloud only when disk runs low

21 MCP Tools

Tool                         Description
memory_ingest                Index a folder of documents into FAISS
memory_search                Semantic search across all indexed content
memory_search_occurrences    Count keyword occurrences across documents
memory_get_context           Get relevant context for the current task
memory_list_documents        List all indexed documents with stats
memory_transcribe            Transcribe audio files (Whisper)
memory_status                Engine status (index size, model, device)
memory_compact               Compact/deduplicate the FAISS index
memory_chat_sync             Sync messages from configured chat sources
memory_chat_source_add       Add a chat source (Claude, ChatGPT, etc.)
memory_chat_source_remove    Remove a chat source
memory_chat_status           Status of all chat sources
memory_chat_auto_detect      Auto-detect chat workspaces on disk
memory_session_brief         Full memory briefing for session start
memory_chat_save             Persist conversation messages to memory
memory_project_link          Link another project's memory
memory_project_unlink        Unlink a project
memory_project_list          List all linked projects
memory_cloud_configure       Configure cloud storage backend for overflow
memory_cloud_status          Show local disk + cloud storage status
memory_cloud_sync            Push/pull/auto-sync between local and cloud
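As an illustration of what a tool like memory_chat_auto_detect might do, the sketch below probes well-known workspace directories on disk. The directory names here are hypothetical placeholders, not the extractor's actual search paths.

```python
from pathlib import Path

# Hypothetical workspace locations; the real extractor's paths may differ.
CANDIDATE_DIRS = {
    "claude": Path.home() / ".claude" / "projects",
    "chatgpt": Path.home() / ".chatgpt" / "conversations",
    "copilot": Path.home() / ".vscode" / "copilot",
}

def auto_detect_sources():
    """Return the names of chat sources whose workspace directory exists."""
    return sorted(name for name, path in CANDIDATE_DIRS.items() if path.is_dir())

print(auto_detect_sources())
```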

Quick Start

Prerequisites

  • Python 3.10+
  • Optional: tesseract (OCR), ffmpeg (audio), antiword (legacy .doc)
# macOS
brew install tesseract ffmpeg antiword

# Ubuntu/Debian
sudo apt-get install tesseract-ocr ffmpeg antiword

Install

git clone https://github.com/romainsantoli-web/Memory-os-ai.git
cd Memory-os-ai
pip install -e ".[dev,audio]"

Auto-Setup (recommended)

# Setup for your AI client:
memory-os-ai setup claude-code    # Claude Code
memory-os-ai setup codex          # Codex CLI
memory-os-ai setup vscode         # VS Code Copilot
memory-os-ai setup claude-desktop # Claude Desktop
memory-os-ai setup chatgpt        # ChatGPT (manual bridge)
memory-os-ai setup all            # All of the above

# Check status:
memory-os-ai setup status

Manual Start

# stdio (default — Claude Code, VS Code, Codex)
memory-os-ai

# SSE transport (port 8765)
memory-os-ai --sse

# Streamable HTTP (port 8765)
memory-os-ai --http
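For VS Code, the bundled bridge ships an mcp.json; a minimal stdio entry might look like the sketch below. The field names follow VS Code's MCP configuration schema as commonly documented — verify against the file in bridges/vscode/ before relying on it.

```json
{
  "servers": {
    "memory-os-ai": {
      "type": "stdio",
      "command": "memory-os-ai"
    }
  }
}
```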

Project Structure

Memory-os-ai/
├── src/memory_os_ai/
│   ├── __init__.py          # Public API: MemoryEngine, ChatExtractor, TOOL_MODELS
│   ├── __main__.py          # python -m memory_os_ai entry point
│   ├── server.py            # MCP server — 21 tools, 3 transports, resources
│   ├── engine.py            # FAISS engine — indexing, search, compact, session brief
│   ├── cloud_storage.py     # 8 cloud backends (GDrive, iCloud, Dropbox, OneDrive, S3, Azure, Box, B2)
│   ├── storage_router.py    # Smart routing: local-first with cloud overflow
│   ├── models.py            # 21 Pydantic models + TOOL_MODELS registry
│   ├── chat_extractor.py    # 4 extractors: Claude, ChatGPT, Copilot, terminal
│   ├── instructions.py      # MEMORY_INSTRUCTIONS for AI clients
│   └── setup.py             # Auto-setup CLI for 5 AI clients
├── bridges/
│   ├── claude-code/         # CLAUDE.md with memory rules
│   ├── claude-desktop/      # config.json for Claude Desktop
│   ├── codex/               # AGENTS.md for Codex CLI
│   ├── vscode/              # mcp.json for VS Code
│   └── chatgpt/             # mcp-connection.json for ChatGPT
├── tests/                   # 410+ tests — 96% coverage
│   ├── test_memory.py       # Engine + models (60 tests)
│   ├── test_chat_extractor.py  # Chat extraction (39 tests)
│   ├── test_bridges.py      # Bridge configs (22 tests)
│   ├── test_gaps.py         # Compact, cross-project, resources (34 tests)
│   ├── test_server_dispatch.py # Server dispatch + async (61 tests)
│   ├── test_setup.py        # Setup CLI targets
│   ├── test_z_coverage_boost.py # Coverage boost (35 tests)
│   └── test_zz_full_coverage.py # Full coverage (97 tests)
├── pyproject.toml           # v3.1.0 — deps, scripts, coverage config + cloud optional deps
├── Dockerfile               # Container deployment
└── README.md

Cloud Storage (v3.1.0)

When local disk runs low (< 500 MB free by default), memory data automatically overflows to a configured cloud backend.
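The overflow decision can be approximated with the standard library. This is a sketch of the threshold semantics described above; the actual logic in storage_router.py may differ.

```python
import shutil

# 500 MB, matching the MEMORY_DISK_THRESHOLD default (524288000 bytes).
DEFAULT_THRESHOLD = 500 * 1024 * 1024

def should_offload(path=".", threshold=DEFAULT_THRESHOLD):
    """True when free space at `path` has dropped below the overflow threshold."""
    return shutil.disk_usage(path).free < threshold

print(should_offload())  # False on a machine with more than 500 MB free
```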

Supported Providers

Provider        Install                                     Credentials
Google Drive    pip install memory-os-ai[cloud-gdrive]      credentials_json or token_json + folder_id
iCloud Drive    (macOS native, no extra deps)               container name (default: memory-os-ai)
Dropbox         pip install memory-os-ai[cloud-dropbox]     access_token + folder
OneDrive        (auto-detects mount) or Graph API           mount_path or access_token
Amazon S3       pip install memory-os-ai[cloud-s3]          bucket, aws_access_key_id, aws_secret_access_key
Azure Blob      pip install memory-os-ai[cloud-azure]       connection_string + container
Box             pip install memory-os-ai[cloud-box]         access_token + folder_id
Backblaze B2    pip install memory-os-ai[cloud-b2]          application_key_id, application_key, bucket_name
All providers   pip install memory-os-ai[cloud-all]

Usage

# Configure via environment (auto-activates on server start)
export MEMORY_CLOUD_PROVIDER=icloud
export MEMORY_CLOUD_CONFIG='{"container": "memory-os-ai"}'
memory-os-ai

# Or configure at runtime via MCP tool:
#   memory_cloud_configure(provider="s3", credentials={"bucket": "my-bucket", ...})
#   memory_cloud_status()       → local disk + cloud usage
#   memory_cloud_sync("push")   → backup to cloud
#   memory_cloud_sync("pull")   → restore from cloud
#   memory_cloud_sync("auto")   → offload if disk low

Configuration

Environment Variables

Variable                Default             Description
MEMORY_CACHE_DIR        ~/.memory-os-ai     Cache / FAISS index directory
MEMORY_MODEL            all-MiniLM-L6-v2    SentenceTransformer model name
MEMORY_API_KEY          (none)              Optional API key for SSE/HTTP auth
MEMORY_CLOUD_PROVIDER   (none)              Cloud provider name (see table above)
MEMORY_CLOUD_CONFIG     (none)              JSON credentials or path to a JSON file
MEMORY_DISK_THRESHOLD   524288000           Bytes free before cloud overflow (500 MB)
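MEMORY_CLOUD_CONFIG accepts either inline JSON or a path to a JSON file. A plausible reading of that contract in plain Python looks like the following; the server's actual parsing may differ.

```python
import json
import os
from pathlib import Path

def load_cloud_config(raw=None):
    """Parse MEMORY_CLOUD_CONFIG: inline JSON, or a path to a JSON file."""
    raw = raw if raw is not None else os.environ.get("MEMORY_CLOUD_CONFIG")
    if not raw:
        return None
    candidate = Path(raw)
    if candidate.is_file():
        # A path was given: read credentials from the file.
        return json.loads(candidate.read_text())
    # Otherwise treat the value itself as a JSON document.
    return json.loads(raw)

print(load_cloud_config('{"container": "memory-os-ai"}'))  # {'container': 'memory-os-ai'}
```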

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
python -m pytest tests/ -v

# Run with coverage
python -m pytest tests/ --cov=memory_os_ai --cov-report=term-missing

# Coverage threshold: 80% (enforced in pyproject.toml)

License

GNU Lesser General Public License v3.0 (LGPL-3.0). See LICENSE for details.

For commercial licensing, contact romainsantoli@gmail.com.

Part of the OpenClaw Ecosystem

Memory OS AI is designed to work alongside the OpenClaw agent infrastructure:

Repo                        Description
setup-vs-agent-firm         Factory for AI agent firms — 28 SKILL.md, 5 SOUL.md, 15 sectors
mcp-openclaw-extensions     115 MCP tools — security audit, A2A bridge, fleet management
Memory OS AI (this repo)    Semantic memory + chat persistence — universal MCP bridge

Together they form a complete stack: memory (this repo) → skills & souls (setup-vs-agent-firm) → security & orchestration (mcp-openclaw-extensions).

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.


⚠️ AI-generated content: human review required before use.
