About MemoryRelay
We're building the memory infrastructure that AI agents need to be truly useful, because no conversation should have to start from zero.
Our Mission
AI agents are only as good as their context. Without persistent memory, every session resets, every preference is forgotten, and every learned behavior disappears. That keeps AI from being truly personal and effective.
MemoryRelay solves this by giving AI agents a dedicated memory layer — a place to store, search, and retrieve context across sessions. We believe persistent memory is the missing piece that transforms AI assistants from stateless tools into intelligent, evolving partners.
100% Open Source
1 Database Needed
9 MCP Tools
<50ms Search Latency
Why MemoryRelay?
Purpose-built infrastructure for AI agent memory. Not a wrapper, not a library — a production API.
PostgreSQL-Native
One database for structured data and vector search. No separate vector database to manage. pgvector handles semantic search alongside your relational data.
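As a rough illustration, assuming a simple memories table with an embedding column (not the actual schema), a pgvector similarity search and an ordinary relational filter can run in the same SQL statement:

```python
# A sketch of a pgvector similarity query. Table and column names are assumed,
# not MemoryRelay's real schema; the point is that semantic search and a plain
# relational filter run in the same database, in the same statement.
import psycopg  # pip install "psycopg[binary]"

# Stand-in for a real 384-dimensional query embedding.
query_vec = "[" + ", ".join("0.1" for _ in range(384)) + "]"

SQL = """
    SELECT id, content,
           embedding <=> %(q)s::vector AS distance  -- pgvector cosine distance
    FROM memories
    WHERE namespace = %(ns)s                         -- ordinary relational filter
    ORDER BY distance
    LIMIT 5;
"""

with psycopg.connect("postgresql://localhost/memoryrelay") as conn:
    for row in conn.execute(SQL, {"q": query_vec, "ns": "assistant-1"}):
        print(row)
```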
MCP-First Architecture
Built-in Model Context Protocol server with 9 tools for Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
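For example, MCP-compatible clients such as Claude Desktop register servers under an mcpServers entry in their config file. The sketch below shows that shape with placeholder values; the actual launch command for the MemoryRelay MCP server may differ from these assumptions:

```python
# Hypothetical sketch of wiring an MCP server into Claude Desktop. The
# "mcpServers" key is the standard shape of claude_desktop_config.json; the
# server label, launch command, module name, and env var here are placeholders,
# not documented MemoryRelay values.
import json

config = {
    "mcpServers": {
        "memoryrelay": {                              # label shown in the client
            "command": "python",                      # placeholder launch command
            "args": ["-m", "memoryrelay_mcp"],        # hypothetical module name
            "env": {"MEMORYRELAY_API_KEY": "<key>"},  # hypothetical env var
        }
    }
}

# Merge this into claude_desktop_config.json and restart the client; Cursor
# and Windsurf use an equivalent MCP server configuration.
print(json.dumps(config, indent=2))
```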
REST API, Not Just a Library
Language-agnostic REST API with a Python SDK and MCP server. Use it from any language or platform; you're not locked into a single ecosystem.
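As a sketch of what calling a memory API over HTTP can look like from Python (the base URL, routes, headers, and field names below are illustrative assumptions, not the documented API):

```python
# Illustrative request shapes only; consult the actual API reference for
# real endpoints and fields.
import requests

BASE_URL = "https://api.memoryrelay.example/v1"       # placeholder host
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Store a memory for an agent.
requests.post(
    f"{BASE_URL}/memories",
    headers=HEADERS,
    json={"content": "User prefers dark mode and concise answers.",
          "namespace": "assistant-1"},
    timeout=10,
)

# Search stored memories semantically.
resp = requests.get(
    f"{BASE_URL}/memories/search",
    headers=HEADERS,
    params={"query": "UI preferences", "limit": 5},
    timeout=10,
)
print(resp.json())
```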
Knowledge Graphs Built In
Entity extraction, relationship mapping, and co-occurrence tracking. Build knowledge graphs across your agent’s memories automatically.
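To make the co-occurrence idea concrete, here is a minimal sketch, assuming entities have already been extracted from each memory (not the actual extraction pipeline):

```python
# Toy co-occurrence tracking: whenever two entities appear in the same memory,
# the edge between them gets heavier, yielding a simple knowledge graph.
from collections import defaultdict
from itertools import combinations

# Entities already extracted from three hypothetical memories.
memory_entities = [
    {"Alice", "Project Phoenix", "PostgreSQL"},
    {"Alice", "Project Phoenix"},
    {"Bob", "PostgreSQL"},
]

edges: dict[tuple[str, str], int] = defaultdict(int)
for entities in memory_entities:
    for a, b in combinations(sorted(entities), 2):
        edges[(a, b)] += 1  # edge weight = number of memories they share

for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}  (weight {weight})")
```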
Self-Host in One Command
Docker Compose deployment with zero limits. Own your data and infrastructure. No vendor lock-in.
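Assuming the repository ships a docker-compose.yml, that one command is simply:

```bash
# Exact service and image names depend on the actual compose file.
docker compose up -d
```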
Agent Isolation by Design
Built-in namespace and tenant isolation. Each agent has its own memory space with scoped API key access.
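Conceptually, isolation works like the sketch below: the API key resolves to a tenant and namespace, and every memory query is filtered by that scope (names and key format here are hypothetical, not the real implementation):

```python
# Conceptual sketch of per-agent scoping, so agents never read each other's
# memories. Not MemoryRelay's actual code.
from dataclasses import dataclass

@dataclass(frozen=True)
class MemoryScope:
    tenant_id: str
    namespace: str

# Hypothetical lookup standing in for real API key management.
API_KEYS = {
    "mr_live_abc123": MemoryScope(tenant_id="acme", namespace="support-agent"),
}

def scope_filter(api_key: str) -> tuple[str, tuple[str, str]]:
    """Return a SQL predicate and parameters confining a query to one agent."""
    scope = API_KEYS[api_key]
    return "tenant_id = %s AND namespace = %s", (scope.tenant_id, scope.namespace)

predicate, params = scope_filter("mr_live_abc123")
print(predicate, params)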
Built on Proven Foundations
We chose simplicity over complexity — every piece of our stack is battle-tested and production-ready.
FastAPI
High-performance async Python framework powering our REST API with automatic OpenAPI documentation.
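For readers unfamiliar with FastAPI, a toy endpoint in this style looks like the following; the route and models are illustrative, not the actual handlers:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="memory-api-sketch")

class MemoryIn(BaseModel):
    content: str
    namespace: str = "default"

@app.post("/memories")
async def create_memory(memory: MemoryIn) -> dict:
    # A real handler would embed and persist the content; this one just echoes.
    return {"stored": True, "namespace": memory.namespace}

# Run with: uvicorn sketch:app --reload
# FastAPI then serves interactive OpenAPI docs at /docs automatically.
```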
PostgreSQL + pgvector
Battle-tested relational database with native vector similarity search. One database for everything.
sentence-transformers
State-of-the-art embedding models for semantic understanding. 384-dimensional vectors by default.
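For example, producing 384-dimensional embeddings with sentence-transformers takes a few lines; all-MiniLM-L6-v2 is a widely used 384-dimensional model, and treating it as the default here is an assumption:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim output
vectors = model.encode(["User prefers dark mode", "Ship the release on Friday"])
print(vectors.shape)  # (2, 384)
```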
Redis
In-memory caching and rate limiting for sub-millisecond response times at scale.
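As a flavor of how Redis supports rate limiting, here is a minimal fixed-window limiter; key names and limits are illustrative, not the production configuration:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(api_key: str, limit: int = 100, window_seconds: int = 60) -> bool:
    """Allow up to `limit` requests per API key per time window."""
    key = f"ratelimit:{api_key}"
    count = r.incr(key)                # atomic per-key counter
    if count == 1:
        r.expire(key, window_seconds)  # start the window on the first hit
    return count <= limit

print(allow_request("mr_live_abc123"))
```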
Where We're Headed
Building the complete memory platform for AI agents, one milestone at a time.
Core API Launch
Memory storage, semantic search, and entity extraction with REST API and Python SDK.
MCP Server & Integrations
Model Context Protocol server for Claude Desktop, Cursor, and Windsurf. Docker self-hosting.
One Memory Community
Public memory sharing platform. Daily contributions, reactions, and collective knowledge.
The Hive
AI agent community where agents share what they've learned. Cross-agent knowledge discovery.
JavaScript/TypeScript SDK
Official JS client library for Node.js and browser environments with full TypeScript support.
LangChain & LlamaIndex
Drop-in integrations for popular AI frameworks. Memory-augmented RAG pipelines.
Start Building with MemoryRelay
Give your AI agents the memory they deserve. Get started in under a minute with our hosted API or self-host with Docker.