
agentmem

Lightweight persistent memory for AI agents. Hybrid search. 16 tools. One SQLite file.

Updated: Feb 21, 2026

mcp-name: io.github.oxgeneral/agentmem

Lightweight persistent memory for AI agents. One SQLite file. Hybrid search (keywords + semantics). Zero to 12MB install.

No PyTorch. No cloud. No server. Just memory.

Why

Every AI agent session starts from zero. Context windows compress, conversations end, memory vanishes. agentmem gives agents persistent memory that survives across sessions — in a single SQLite file.

  • Hybrid search: FTS5 full-text keywords + vector semantic search, fused with adaptive ranking
  • 4 operational modes: from zero dependencies (stdlib only) to best quality (12MB)
  • 16 MCP tools: recall, remember, save_state, compact, consolidate, entities, and more
  • HTTP REST API: 14 endpoints, zero-dependency server, CORS-ready
  • 5 memory tiers: core, learned, episodic, working (auto-expires), procedural (behavioral rules)
  • Namespaces: multi-user, multi-agent memory isolation
  • Temporal versioning: fact evolution chains with supersedes tracking
  • Entity extraction: auto-extracts @mentions, URLs, IPs, env vars, money amounts
  • Conversation extraction: auto-extracts facts, decisions, TODOs from chat history
  • Importance scoring: auto-scores memories by tier, length, specificity, structure
  • Memory consolidation: finds and merges near-duplicate memories
  • Recency boost: newer memories rank higher with configurable decay
  • Multilingual: Russian keywords via FTS5, English semantics via embeddings
  • Fast: <1ms/query hybrid search, <5ms cold start, <0.2ms/chunk import
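The recency boost in the list above can be pictured as an exponential decay blended into the relevance score. A minimal sketch of the idea; the half-life and blend formula here are illustrative assumptions, not agentmem's actual decay curve:

```python
import math
import time

def recency_boost(score, created_at, recency_weight=0.15, half_life_days=30.0):
    """Blend a relevance score with an exponential recency decay.

    Hypothetical sketch: agentmem's real decay shape and parameters may differ.
    """
    age_days = (time.time() - created_at) / 86400
    decay = 0.5 ** (age_days / half_life_days)  # 1.0 now, 0.5 after one half-life
    return (1 - recency_weight) * score + recency_weight * decay

now = time.time()
fresh = recency_boost(0.8, now)                 # stored just now
stale = recency_boost(0.8, now - 90 * 86400)    # stored 90 days ago
# with equal base relevance, the fresher memory ranks higher
```

Raising `recency_weight` makes the store behave more like a log; lowering it makes it behave more like a knowledge base.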

Install

# Best quality (sqlite-vec + model2vec, 12MB total)
pip install agentmem[all]

# Minimal (sqlite-vec + hash embeddings, 151KB)
pip install agentmem

# Zero dependencies (pure Python, stdlib only)
pip install agentmem --no-deps

Quick Start

Python API

from agentmem import MemoryStore, get_embedding_model

# Auto-selects best available backend
embed = get_embedding_model()
store = MemoryStore("memory.db", embedding_dim=embed.dim)
store.set_embed_fn(embed)

# Store memories with namespaces
store.remember("Server costs $50/month", tier="core", namespace="infra")
store.remember("API returns 403 without auth", tier="learned", namespace="api")
store.remember("Deployed v2.1 at 15:30", tier="episodic")

# Search — hybrid keyword + semantic, with recency boost
results = store.recall("server costs", recency_weight=0.15)

# Namespace isolation
results = store.recall("server", namespace="infra")

# Save working state before context compression
store.save_state("Working on auth fix, step 3/5, blocked by CORS")

# Add behavioral rules (procedural memory)
store.add_procedure("Always use HTTPS in production")
store.add_procedure("Never expose debug endpoints")
rules = store.get_procedures()  # → formatted for system prompt

# Update facts with version chain
store.update_memory(old_id=1, new_content="Server costs $75/month")
history = store.history(memory_id=2)  # → trace fact evolution

# Find related memories by entity
related = store.related("10.0.0.1")  # → all memories mentioning this IP
entities = store.entities(entity_type="ip")  # → list all known IPs

# Auto-extract from conversations
messages = [
    {"role": "user", "content": "Set API_KEY to sk-abc123. Always validate input."},
    {"role": "assistant", "content": "Noted. I decided to use pydantic for validation."},
]
result = store.process_conversation(messages, namespace="project")
# → extracts config, preferences, decisions automatically

# Maintenance
store.compact(max_age_days=90)  # archive old low-value memories
store.consolidate(similarity_threshold=0.85)  # merge near-duplicates

# Import markdown files
store.import_markdown("MEMORY.md", tier="core")

CLI

# Initialize database
agentmem init --db memory.db

# Import markdown files
agentmem import MEMORY.md --tier core -n my-agent
agentmem import-dir ./daily-logs/ --tier episodic

# Search with namespace filter
agentmem search "deployment process" --limit 5 -n infra

# Manage procedures
agentmem add-procedure "Always use markdown formatting"
agentmem procedures

# View entities and relations
agentmem entities --type ip
agentmem related 10.0.0.1

# Maintenance
agentmem compact --max-age-days 90 --dry-run
agentmem consolidate --threshold 0.85

# Process conversation
agentmem process chat.json -n project

# Stats and export
agentmem stats
agentmem export --tier core

MCP Server (stdio)

python -m agentmem --db memory.db

Add to your MCP client config:

{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["-m", "agentmem", "--db", "/path/to/memory.db"]
    }
  }
}

HTTP REST API

# Start HTTP server
agentmem serve-http --port 8422

# Or directly
agentmem-http --port 8422 --db memory.db

# Store a memory
curl -X POST http://localhost:8422/remember \
  -H "Content-Type: application/json" \
  -d '{"content": "Server IP is 10.0.0.1", "tier": "core", "namespace": "infra"}'

# Search
curl "http://localhost:8422/recall?query=server+IP&namespace=infra"

# Health check
curl http://localhost:8422/health

16 MCP tools / 14 HTTP endpoints:

| Tool | HTTP | Description |
|------|------|-------------|
| recall | GET /recall | Hybrid keyword + semantic search |
| remember | POST /remember | Store a new memory |
| save_state | POST /save_state | Emergency save before context compression |
| today | GET /today | Get all memories from today |
| forget | POST /forget | Archive a memory (soft delete) |
| unarchive | POST /unarchive | Restore an archived memory |
| stats | GET /stats | Memory statistics and health |
| compact | POST /compact | Archive low-value memories |
| consolidate | POST /consolidate | Merge near-duplicate memories |
| update_memory | POST /update_memory | Replace a memory with version chain |
| history | GET /history | Trace fact version history |
| related | GET /related | Find memories by entity |
| entities | GET /entities | List all extracted entities |
| get_procedures | (MCP only) | Get behavioral rules for system prompt |
| add_procedure | (MCP only) | Add a behavioral rule |
| process_conversation | (MCP only) | Auto-extract from chat history |

Memory Tiers

| Tier | Purpose | Auto-compacted | Example |
|------|---------|----------------|---------|
| core | Permanent facts | Never | "Server IP is 10.0.0.1" |
| procedural | Behavioral rules | Never | "Always use HTTPS" |
| learned | Discovered knowledge | After 90 days | "API returns 403 without auth" |
| episodic | Events | After 90 days | "Deployed v2.1 at 15:30" |
| working | Current task state | After 24 hours | "Working on step 3/5" |
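Auto-compaction amounts to a per-tier TTL sweep: tiers without a TTL are never touched, the rest get archived once they age out. A hypothetical sketch of the mechanism; the schema and column names here are illustrative, not agentmem's actual tables:

```python
import sqlite3
import time

# Illustrative TTLs mirroring the tier table; core/procedural have none
TTL = {"working": 24 * 3600, "learned": 90 * 86400, "episodic": 90 * 86400}

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, tier TEXT, "
    "content TEXT, created_at REAL, archived INTEGER DEFAULT 0)"
)
now = time.time()
db.execute("INSERT INTO memories (tier, content, created_at) VALUES (?, ?, ?)",
           ("core", "Server IP is 10.0.0.1", now - 200 * 86400))
db.execute("INSERT INTO memories (tier, content, created_at) VALUES (?, ?, ?)",
           ("working", "Working on step 3/5", now - 2 * 86400))

def compact(db, now):
    # Soft-archive anything past its tier's TTL; untracked tiers are permanent
    for tier, ttl in TTL.items():
        db.execute("UPDATE memories SET archived = 1 WHERE tier = ? AND created_at < ?",
                   (tier, now - ttl))

compact(db, now)
rows = db.execute("SELECT tier, archived FROM memories ORDER BY id").fetchall()
# the 200-day-old core fact survives; the 2-day-old working note is archived
```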

Namespaces

Isolate memories per user, agent, or project:

# Store in namespaces
store.remember("Alice's API key", namespace="user/alice")
store.remember("Bob's config", namespace="user/bob")
store.remember("Shared fact", namespace="team")

# Search within namespace (prefix matching)
store.recall("API", namespace="user/alice")  # only Alice's memories
store.recall("API", namespace="user")  # Alice + Bob (prefix match)
store.recall("API")  # everything
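Prefix matching like this is straightforward to express in SQLite: match the namespace exactly, or any child under it with a `LIKE` on the `/`-separated prefix. A sketch of the likely mechanism, assuming a plain `namespace` column (the real schema may differ):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (content TEXT, namespace TEXT)")
db.executemany("INSERT INTO memories VALUES (?, ?)", [
    ("Alice's API key", "user/alice"),
    ("Bob's config", "user/bob"),
    ("Shared fact", "team"),
])

def recall(db, namespace=None):
    if namespace is None:
        return db.execute("SELECT content FROM memories").fetchall()
    # exact namespace, or any descendant namespace under it
    return db.execute(
        "SELECT content FROM memories WHERE namespace = ? OR namespace LIKE ?",
        (namespace, namespace + "/%"),
    ).fetchall()

assert len(recall(db, "user/alice")) == 1   # only Alice
assert len(recall(db, "user")) == 2         # prefix match: Alice + Bob
assert len(recall(db)) == 3                 # everything
```

Anchoring the `LIKE` pattern on the separator (`user/%`) keeps `user` from accidentally matching a sibling namespace such as `users`.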

Temporal Versioning

Track how facts evolve over time:

# Initial fact
r1 = store.remember("Server costs $50/month", tier="core")

# Fact changes — old version archived, linked via supersedes
r2 = store.update_memory(r1["id"], "Server costs $75/month")

# Trace the history
history = store.history(r2["id"])
# → [{"id": 2, "content": "...$75..."}, {"id": 1, "content": "...$50..."}]
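The supersedes chain is effectively a linked list walked backwards from the newest version. An illustrative sketch with a minimal schema; the column names are assumptions, not agentmem's actual layout:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, "
    "supersedes INTEGER, archived INTEGER DEFAULT 0)"
)
db.execute("INSERT INTO memories (content) VALUES ('Server costs $50/month')")
# update_memory: archive the old row and link the replacement to it
db.execute("UPDATE memories SET archived = 1 WHERE id = 1")
db.execute("INSERT INTO memories (content, supersedes) VALUES ('Server costs $75/month', 1)")

def history(db, memory_id):
    """Walk the supersedes links, newest first."""
    chain = []
    while memory_id is not None:
        row = db.execute(
            "SELECT id, content, supersedes FROM memories WHERE id = ?",
            (memory_id,),
        ).fetchone()
        chain.append({"id": row[0], "content": row[1]})
        memory_id = row[2]
    return chain

versions = history(db, 2)  # newest → oldest
```

Because old versions are archived rather than deleted, they drop out of normal recall but stay reachable through the chain.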

Entity Extraction

Automatic regex-based NER on every remember() call:

| Type | Pattern | Example |
|------|---------|---------|
| mention | @username | @alice |
| url | https://... | https://api.example.com |
| ip | N.N.N.N | 10.0.0.1 |
| port | :NNNN | :8080 |
| email | user@domain | admin@example.com |
| env_var | ALL_CAPS | OPENAI_API_KEY |
| money | $NNN | $50 |
| path | /unix/path | /etc/nginx/conf.d |
| hashtag | #tag | #deployment |

# Find all memories mentioning an entity
store.related("10.0.0.1")
store.related("@alice", entity_type="mention")

# List all known entities
store.entities()  # sorted by memory count
store.entities(entity_type="ip")
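A few of those entity types, sketched with illustrative regexes to show the shape of the approach (agentmem's actual patterns are likely stricter and cover all the types in the table):

```python
import re

# Illustrative patterns only, not agentmem's real regexes
PATTERNS = {
    "mention": r"@[A-Za-z_][\w-]*",
    "ip": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "env_var": r"\b[A-Z][A-Z0-9_]{2,}\b",
    "money": r"\$\d+(?:\.\d{2})?",
}

def extract_entities(text):
    """Return (type, value) pairs for every pattern hit in the text."""
    found = []
    for etype, pattern in PATTERNS.items():
        for match in re.findall(pattern, text):
            found.append((etype, match))
    return found

ents = extract_entities("Ask @alice to set OPENAI_API_KEY on 10.0.0.1 (costs $50)")
```

Regex-only NER is what keeps extraction dependency-free and fast enough to run on every remember() call.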

Conversation Auto-Extraction

Auto-extract memories from chat history (regex-only, no LLM):

messages = [
    {"role": "user", "content": "Set DATABASE_URL to postgres://localhost/mydb"},
    {"role": "assistant", "content": "I decided to use connection pooling. Important: max 20 connections."},
    {"role": "user", "content": "Always validate input. TODO: add rate limiting."},
]
result = store.process_conversation(messages)
# Extracts: config→core, decisions→episodic, preferences→procedural, todos→working, important→core
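The cue-phrase approach can be sketched as a small ordered rule table that routes each sentence to a tier; the patterns below are illustrative guesses at the kind of cues involved, not agentmem's actual rules:

```python
import re

# Illustrative cue patterns; first matching rule wins
RULES = [
    (re.compile(r"\bset (\w+) to (\S+)", re.I), "core"),          # config
    (re.compile(r"\bI decided\b", re.I), "episodic"),             # decisions
    (re.compile(r"\b(?:always|never)\b", re.I), "procedural"),    # preferences
    (re.compile(r"\bTODO:?", re.I), "working"),                   # todos
]

def process_conversation(messages):
    extracted = []
    for msg in messages:
        # split on sentence-ending punctuation followed by whitespace
        for sentence in re.split(r"(?<=[.!?])\s+", msg["content"]):
            for pattern, tier in RULES:
                if pattern.search(sentence):
                    extracted.append({"tier": tier, "content": sentence.strip()})
                    break
    return extracted

msgs = [
    {"role": "user", "content": "Set DATABASE_URL to postgres://localhost/mydb"},
    {"role": "user", "content": "Always validate input. TODO: add rate limiting."},
]
tiers = [m["tier"] for m in process_conversation(msgs)]
```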

Operational Modes

agentmem automatically selects the best available mode:

| Mode | Install size | Init time | Query time | Dependencies |
|------|--------------|-----------|------------|--------------|
| sqlite-vec + model2vec | 12 MB | ~5 ms* | ~1 ms | sqlite-vec, model2vec, numpy |
| sqlite-vec + hash | 151 KB | ~5 ms | ~0.8 ms | sqlite-vec |
| pure Python + hash | 0 KB | ~3 ms | ~1.8 ms | none (stdlib only) |
| pure + int8 quantize | 0 KB | ~3 ms | ~3 ms | none (stdlib only) |

*With lazy loading — model2vec loads on first query, not on init
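The zero-dependency modes imply an embedding that needs no model file at all. One classic stdlib-only technique is the hashing trick: hash each token to a dimension and a sign, accumulate, and normalize. A sketch of that idea (agentmem's actual hash embedder may differ in hash function, dimensionality, and tokenization):

```python
import hashlib
import math

def hash_embed(text, dim=256):
    """Dependency-free hashing-trick embedding (illustrative sketch)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        digest = hashlib.md5(token.encode()).digest()
        index = int.from_bytes(digest[:4], "little") % dim   # which dimension
        sign = 1.0 if digest[4] % 2 == 0 else -1.0           # which direction
        vec[index] += sign
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit length, so dot product = cosine

a = hash_embed("server deployment costs")
b = hash_embed("deployment costs of the server")
cos = sum(x * y for x, y in zip(a, b))  # high overlap in shared tokens
```

Hash embeddings capture lexical overlap rather than true semantics, which is why the table shows model2vec as the "best quality" tier while hashing covers the zero-install case.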

Architecture

┌───────────────────────────────────────────────┐
│                 MemoryStore                   │
│  ┌──────────┐  ┌──────────┐  ┌────────────┐   │
│  │  FTS5    │  │  Vector  │  │  Entity    │   │
│  │ keywords │  │  Index   │  │  Index     │   │
│  │ + BM25   │  │ cosine   │  │ regex NER  │   │
│  └────┬─────┘  └────┬─────┘  └─────┬──────┘   │
│       └──────┬──────┘              │          │
│   Adaptive Hybrid Scorer           │          │
│   (query classify + recency +      │          │
│    importance boost)               │          │
│  ┌─────────────────────────────────┴────────┐ │
│  │             SQLite + WAL                 │ │
│  │ memories │ memories_fts │ vecs │ entities│ │
│  └──────────────────────────────────────────┘ │
│              One file: memory.db              │
└───────────────────────────────────────────────┘
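The adaptive hybrid scorer in the diagram can be approximated as a weighted fusion whose keyword/semantic balance depends on the query's shape: exact-looking queries lean on BM25, natural-language queries lean on vectors. All weights and the classifier heuristic below are invented for illustration; agentmem's real scorer is not documented here:

```python
import re

def classify_query(query):
    """Crude heuristic: symbols, digits, or very short queries suggest an
    exact lookup, so weight keywords higher. Illustrative only."""
    if re.search(r"[@/$#:]|\d", query) or len(query.split()) <= 2:
        return 0.7  # keyword weight
    return 0.4

def fuse(bm25_score, cosine_score, query, recency=0.0, importance=0.0):
    kw = classify_query(query)
    base = kw * bm25_score + (1 - kw) * cosine_score
    # additive boosts on top of the fused relevance
    return base + 0.15 * recency + 0.1 * importance

# an exact-looking query leans on BM25; a verbose one leans on vectors
exact = fuse(bm25_score=1.0, cosine_score=0.2, query="10.0.0.1")
fuzzy = fuse(bm25_score=0.2, cosine_score=1.0,
             query="how much does the server cost monthly")
```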

Comparison

| Feature | agentmem | ChromaDB | LanceDB | mem0 | Zep |
|---------|----------|----------|---------|------|-----|
| Install size | 0-12 MB | 400+ MB | 100+ MB | 500+ MB | Cloud |
| Cold start | 3-5 ms | seconds | seconds | seconds | N/A |
| PyTorch required | No | Yes | No | Yes | N/A |
| Cloud required | No | No | No | Yes | Yes |
| Zero-dep mode | Yes | No | No | No | No |
| Keyword search | FTS5 (BM25) | No | No | No | Yes |
| MCP server | 16 tools | No | No | Yes | No |
| HTTP API | Built-in | Yes | No | Yes | Yes |
| Single file DB | Yes | No | Yes | No | No |
| Namespaces | Yes | Yes | Yes | Yes | Yes |
| Temporal versioning | Yes | No | Yes | No | Yes |
| Entity extraction | Auto (regex) | No | No | No | No |
| Procedural memory | Yes | No | No | No | No |
| Importance scoring | Auto | No | No | No | No |
| Conversation extraction | Auto (regex) | No | No | Yes (LLM) | Yes (LLM) |
| Memory consolidation | Yes | No | No | Yes (LLM) | No |

License

MIT
