FEATURES

Everything your agent needs to remember

Three-signal retrieval. Local embeddings. Wiki-link knowledge graph. MCP-native. No cloud, no API keys, no vendor lock-in.

90% reduction in redundant context tokens
0 API keys required
Every session remembered
100% of your data stays on your machine

Without memory, your agent re-explains your entire codebase every session. Hundreds of tokens. Every time. Ori surfaces exactly what's relevant instead.

Three signals. One ranked result.

Most memory tools use one search method. Ori combines three — and the combination is what makes it work.

01

Vector Search

Semantic similarity via local embeddings (Xenova/all-MiniLM-L6-v2). Finds conceptually related notes even when the vocabulary differs. "Token incentives" matches "engagement mechanics." Runs entirely on your machine — no API, no latency, no cost.
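As a sketch of what "semantic similarity" means in practice: embeddings are just vectors, and relatedness is their cosine. The vectors and titles below are toy stand-ins (real all-MiniLM-L6-v2 embeddings have 384 dimensions), and this is illustrative, not Ori's internal code.

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-d vectors standing in for real 384-d embeddings.
const query = [0.9, 0.1, 0.0]; // e.g. "token incentives"
const notes = [
  { title: "engagement mechanics", vec: [0.8, 0.2, 0.1] },
  { title: "deploy checklist", vec: [0.0, 0.1, 0.9] },
];

// Rank notes by similarity to the query vector.
const bySimilarity = notes
  .map(n => ({ title: n.title, score: cosine(query, n.vec) }))
  .sort((a, b) => b.score - a.score);
```

Conceptually related notes score high even with zero word overlap, which is how "token incentives" can surface "engagement mechanics".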

02

Keyword Matching

Traditional full-text search for exact terms and phrases. Fast, deterministic, catches things semantic search misses. Function names, specific error messages, exact quotes — keyword search finds them reliably when you need precision over fuzzy recall.

03

Graph Spreading Activation

Wiki-links between notes create a knowledge graph. When a search hits a note, activation spreads to connected notes through those links — surfacing context that neither keyword nor vector search would find alone. The whole is greater than the sum of its parts.
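A minimal sketch of spreading activation under a simple energy-decay model. The graph shape, decay factor, and hop count are illustrative assumptions, not Ori's actual parameters.

```typescript
type Graph = Record<string, string[]>; // note title -> wiki-linked note titles

// Spread activation energy from seed notes (direct search hits) outward
// through wiki-links, attenuating by `decay` at each hop.
function spread(
  graph: Graph,
  seeds: Record<string, number>,
  decay = 0.5,
  hops = 2
): Record<string, number> {
  const activation: Record<string, number> = { ...seeds };
  let frontier = { ...seeds };
  for (let h = 0; h < hops; h++) {
    const next: Record<string, number> = {};
    for (const [node, energy] of Object.entries(frontier)) {
      for (const neighbor of graph[node] ?? []) {
        const passed = energy * decay;
        next[neighbor] = (next[neighbor] ?? 0) + passed;
        activation[neighbor] = (activation[neighbor] ?? 0) + passed;
      }
    }
    frontier = next;
  }
  return activation;
}

// "redis decision" was a direct hit; linked notes pick up energy.
const graph: Graph = {
  "redis decision": ["session architecture", "database criteria"],
  "session architecture": ["auth flow"],
};
const result = spread(graph, { "redis decision": 1.0 });
```

Here "auth flow" gets a nonzero score despite matching neither the keywords nor the embedding of the query, purely because it is two links away from a hit.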

Markdown files. Not a database.

Every competitor stores your memory in an opaque database. Ori stores it as plain markdown files you can read, edit, and own forever.

A memory note looks like this

---
description: Redis gives 10x lower latency
  than Postgres for session state
type: decision
project: [backend]
status: active
created: 2026-03-04
---

Chose Redis over Postgres for session caching
after benchmarking. Postgres p99 was 80ms,
Redis p99 was 8ms at our traffic level.

Relevant Notes:
- [[session architecture overview]]
- [[database selection criteria]]
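Because notes are plain markdown, the graph edges are recoverable with nothing more than a regex. A hypothetical helper (not Ori's internal parser):

```typescript
// Extract [[wiki-link]] targets from a markdown note body.
function extractWikiLinks(markdown: string): string[] {
  const links: string[] = [];
  const pattern = /\[\[([^\]]+)\]\]/g; // matches [[target]]
  for (const match of markdown.matchAll(pattern)) {
    links.push(match[1]);
  }
  return links;
}

const note = `Relevant Notes:
- [[session architecture overview]]
- [[database selection criteria]]`;

const links = extractWikiLinks(note);
```

Any tool that can read text files can traverse the knowledge graph the same way.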

What that means for you

  • Open your memory in Obsidian, VS Code, or any text editor
  • Version your agent's memory with Git — full history, diffs, branches
  • If Ori disappears tomorrow, your files still work
  • Share specific notes with teammates via Git
  • Search your memory with grep, ripgrep, or any tool you already use
  • No database to migrate, no export to worry about

14 tools. Every operation your agent needs.

Works with Claude Code, Cursor, Windsurf, Cline, or any MCP-compatible client. Add to your config and your agent wakes up with memory.
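MCP clients such as Claude Code register servers through a JSON config. A plausible entry for Ori, assuming the conventional `mcpServers` key and the `ori serve --mcp` command shown in the install section (check your client's docs for the exact file and schema):

```json
{
  "mcpServers": {
    "ori": {
      "command": "ori",
      "args": ["serve", "--mcp"]
    }
  }
}
```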

ori_orient

Session briefing — daily status, reminders, vault health, active goals. Call at session start.

ori_query_ranked

Full 3-signal retrieval with intent classification and spreading activation. The main search tool.
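One plausible way three signals could be fused into a single ranking is a per-note weighted sum. The weights and normalization below are invented for illustration; they are not Ori's published scoring formula.

```typescript
// Per-note scores from each retrieval signal, each assumed in [0, 1].
interface Signals { vector: number; keyword: number; graph: number }

// Fuse three signals into one score with illustrative weights.
function fuse(
  s: Signals,
  w = { vector: 0.5, keyword: 0.3, graph: 0.2 }
): number {
  return w.vector * s.vector + w.keyword * s.keyword + w.graph * s.graph;
}

const candidates = [
  { title: "session architecture", signals: { vector: 0.9, keyword: 0.2, graph: 0.7 } },
  { title: "redis decision", signals: { vector: 0.6, keyword: 1.0, graph: 0.9 } },
];

const ranked = candidates
  .map(c => ({ title: c.title, score: fuse(c.signals) }))
  .sort((a, b) => b.score - a.score);
```

A note that is merely decent on all three signals can outrank one that is excellent on a single signal, which is the point of combining them.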

ori_add

Capture a new insight to inbox. Fast, atomic, with prose-as-title convention.

ori_promote

Promote inbox note to the graph with classification, linking, and area assignment.

ori_query_similar

Pure vector search — faster single-signal retrieval when you need speed over depth.

ori_query_important

Notes ranked by PageRank — find the most connected, structurally important memories.
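For intuition, a toy PageRank over a tiny wiki-link graph. The damping factor and iteration count are textbook defaults, not confirmed Ori settings, and dangling-node mass is simply dropped for brevity.

```typescript
// Power-iteration PageRank: heavily linked-to notes accumulate rank.
function pagerank(
  graph: Record<string, string[]>,
  d = 0.85,
  iters = 50
): Record<string, number> {
  const nodes = Object.keys(graph);
  const n = nodes.length;
  let rank: Record<string, number> = Object.fromEntries(nodes.map(k => [k, 1 / n]));
  for (let i = 0; i < iters; i++) {
    const next: Record<string, number> = Object.fromEntries(nodes.map(k => [k, (1 - d) / n]));
    for (const node of nodes) {
      const out = graph[node].filter(t => t in next);
      for (const target of out) {
        next[target] += (d * rank[node]) / out.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Two notes link to "hub note"; it becomes the structurally important one.
const noteGraph = {
  "hub note": [],
  "auth flow": ["hub note"],
  "deploy checklist": ["hub note"],
};
const ranks = pagerank(noteGraph);
```

The hub ends up with the highest rank, which is the kind of "most connected, structurally important" memory this tool surfaces.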

ori_update

Update agent identity, goals, methodology, daily log, or reminders.

ori_health

Full vault diagnostic — orphans, dangling links, schema violations, throughput.

How Ori compares

The alternatives are well-funded. But they're building databases. We're building files.

                   Ori               Mnemos / Mem0 / Zep / Letta
Storage            Markdown files    Cloud database, graph DB (Neo4j), or proprietary
Local-first        ✓ Always          Optional add-on
API keys needed    None              Required
Human-readable     ✓ Plain text      —
Git-versionable    ✓ Native          —
MCP-native         ✓ Built for it    Bolted on
Open source        ✓ Apache-2.0      Partial
Vendor lock-in     None              Medium to high

Give your agent memory in three commands.

npm i -g ori-memory
ori init
ori serve --mcp