Index your markdown notes, documents, and code into a searchable knowledge base for any AI agent. STM + LTM architecture preserves memory across sessions and shares knowledge between agents.
All context is lost when a session ends. Architecture decisions, coding patterns, and debugging history must be re-explained every time.
Knowledge from Claude Code can't be carried over to Gemini CLI. Each agent is trapped in its own isolated memory silo.
Current memory systems only work when agents explicitly search, are locked to specific runtimes, and offer only a single LTM layer.
Automatically injects relevant memories without an explicit request from the agent. 5-level relevance gating + feedback auto-tuning.
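A plausible shape for 5-level gating with feedback-driven `min_score` tuning is sketched below. The level names, band math, and step size are all hypothetical illustrations, not the project's actual implementation.

```python
# Hypothetical sketch of 5-level relevance gating; the level names,
# normalization, and tuning step are illustrative only.
LEVELS = ["block", "low", "medium", "high", "always"]  # the 5 gate levels

def gate(score, min_score=0.35):
    """Map a relevance score in [0, 1] to one of five injection levels."""
    if score < min_score:
        return "block"  # below threshold: never inject
    band = (score - min_score) / (1.0 - min_score)  # normalize the remainder
    return LEVELS[1 + min(3, int(band * 4))]

def tune_min_score(min_score, accepted, step=0.01):
    """Nudge the threshold from feedback: relax on accepts, tighten on rejects."""
    return min(0.9, max(0.1, min_score - step if accepted else min_score + step))
```

The feedback loop only moves the threshold in small clamped steps, so a single noisy signal cannot swing injection behavior.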
STM sits between agent and server as a transparent proxy. Memory injection into all MCP calls — no code changes.
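Conceptually, a transparent proxy of this kind wraps each tool-call result and prepends retrieved memories before the agent sees it. The sketch below uses plain dicts and a stub retriever; none of these shapes or names are the project's real MCP wire format.

```python
# Conceptual sketch of transparent memory injection; the dict shape and
# retrieve() stub are stand-ins, not the actual MCP protocol.
def retrieve(query):
    # Stub: a real proxy would query the STM index here.
    return ["Project uses PostgreSQL 15", "API errors return RFC 7807 JSON"]

def inject_memories(mcp_response, query):
    """Prepend relevant memories to a tool result before passing it through."""
    memories = retrieve(query)
    if not memories:
        return mcp_response  # nothing relevant: pass through untouched
    context = "\n".join(f"[memory] {m}" for m in memories)
    injected = dict(mcp_response)
    injected["content"] = context + "\n---\n" + mcp_response["content"]
    return injected
```

Because injection happens in the proxy, the agent and the upstream server both run unmodified.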
Context Gateway auto-syncs agent definitions, skills, and commands across 6 runtimes.
The right strategy is auto-selected by content type. Progressive delivery for zero information loss.
Namespace isolation plus selective sharing. Knowledge flows Agent→Agent and Human→Agent.
SQLite + ONNX. No GPU, no external services. 100% data sovereignty.
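The local-only stack amounts to storing embeddings in SQLite and scoring them in-process. This sketch brute-forces cosine similarity with the standard library; a real deployment would produce the vectors with an ONNX embedding model rather than the toy 2-d vectors used here.

```python
import json
import math
import sqlite3

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# In-memory DB for the sketch; a real setup persists to a file on disk.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, text TEXT, vec TEXT)")
docs = [("prefer uv over pip", [0.9, 0.1]), ("use ruff for linting", [0.2, 0.8])]
for text, vec in docs:
    db.execute("INSERT INTO notes (text, vec) VALUES (?, ?)", (text, json.dumps(vec)))

def search(query_vec, top_k=1):
    """Score every stored vector against the query and return the best texts."""
    rows = db.execute("SELECT text, vec FROM notes").fetchall()
    scored = [(cosine(query_vec, json.loads(v)), t) for t, v in rows]
    return [t for _, t in sorted(scored, reverse=True)[:top_k]]

print(search([1.0, 0.0]))  # → ['prefer uv over pip']
```

Everything runs in one process against one file, which is what makes the no-GPU, no-external-services claim possible.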
From install to first memory in under 5 minutes. uv install → MCP setup → search.
Guide: How BM25 + vector + RRF fusion search works and how to tune it.
LTM: 10 strategies, auto-selection logic, and query-aware budget allocation.
STM: 5-level gating, feedback loop, and a min_score auto-tuning deep dive.
STM: Namespace design and agent_register/search/share workflows.
LTM: Cross-runtime sync, format conversion, and the LangGraph adapter.
Guide: No GPU. No external services. One uv install is all you need.
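The fusion step behind the hybrid search above can be sketched in a few lines. Reciprocal Rank Fusion scores each document as the sum of 1/(k + rank) over the ranked lists it appears in; the function name and k=60 default below are illustrative, not this project's actual API (k=60 is the constant from the original RRF paper).

```python
def rrf_fuse(bm25_ranked, vector_ranked, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank_d).

    bm25_ranked / vector_ranked are lists of doc ids, best first.
    A document ranked well by either retriever rises; one ranked well
    by both rises to the top.
    """
    scores = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

print(rrf_fuse(["a", "b", "c"], ["b", "c", "a"]))  # → ['b', 'a', 'c']
```

Because RRF only uses ranks, it needs no score normalization between the BM25 and vector retrievers; tuning mostly means adjusting k (larger k flattens the rank weighting).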