Memori - Detailed Introduction to the Open-source AI Memory Engine
An open-source, SQL-native memory engine for LLMs, AI Agents, and multi-agent systems that gives any model persistent, queryable memory with a single line of code.
Project Overview
Memori is an open-source, SQL-native memory engine designed for Large Language Models (LLMs), AI Agents, and multi-agent systems. It enables any LLM to possess persistent, queryable memory capabilities with a single line of code, storing memory data in standard SQL databases.
Core Features:
- Integrates with a single line of code via memori.enable()
- Memory data is stored in standard SQL databases (SQLite, PostgreSQL, MySQL), giving users full ownership and control
- AI can remember conversations, learn from interactions, and maintain context across multiple sessions
Why Choose Memori?
1. One-Line Code Integration
Supports OpenAI, Anthropic, LiteLLM, LangChain, and any LLM framework, making integration extremely simple.
2. SQL-Native Storage
- Portable, queryable, and auditable memory data (see the sketch after this list)
- Stored in a database you fully control
- No complex vector databases required
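Because memories live in an ordinary SQL database, you can audit them with any SQL client or a few lines of Python. Below is a minimal sketch using the standard sqlite3 module; the table and column names (memories, category, content, created_at) are illustrative assumptions, not Memori's documented schema.
import sqlite3

# Inspect stored memories directly with plain SQL.
# NOTE: "memories", "category", "content", and "created_at" are
# hypothetical names for illustration; check your actual schema first.
conn = sqlite3.connect("my_memory.db")
rows = conn.execute(
    "SELECT category, content, created_at FROM memories "
    "WHERE category = ? ORDER BY created_at DESC LIMIT 10",
    ("preference",),
)
for category, content, created_at in rows:
    print(category, content, created_at)
conn.close()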
3. 80-90% Cost Savings
By replacing expensive vector-database infrastructure with standard SQL storage, Memori significantly reduces operational costs.
4. Zero Vendor Lock-in
Memories can be exported in SQLite format, allowing migration anywhere, anytime.
5. Intelligent Memory Management
- Automatic entity extraction
- Relationship mapping
- Context prioritization
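To make these features concrete, one extracted memory record might conceptually look like the structure below. This is an illustrative sketch, not Memori's actual schema; the category values mirror those the Memory Agent uses (facts, preferences, skills, rules, context), described later under "How It Works".
# Illustrative only: a conceptual shape for one extracted memory record.
# Field names are hypothetical; Memori's real schema may differ.
memory_record = {
    "entity": "FastAPI",                 # automatic entity extraction
    "category": "context",               # facts / preferences / skills / rules / context
    "content": "User is building a FastAPI project",
    "related_entities": ["Python", "authentication"],  # relationship mapping
    "priority": 0.8,                     # context prioritization
}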
Quick Start
Installation
pip install memorisdk
Basic Usage
from memori import Memori
from openai import OpenAI
# Initialize
memori = Memori(conscious_ingest=True)
memori.enable()
client = OpenAI()
# First conversation
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "I'm building a FastAPI project"}]
)
# Subsequent conversation - Memori automatically provides context
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Help me add authentication"}]
)
# The LLM automatically knows about your FastAPI project
Supported Databases
Memori supports any standard SQL database:
| Database | Connection String Example |
|---|---|
| SQLite | sqlite:///my_memory.db |
| PostgreSQL | postgresql://user:pass@localhost/memori |
| MySQL | mysql://user:pass@localhost/memori |
| Neon | postgresql://user:pass@ep-*.neon.tech/memori |
| Supabase | postgresql://postgres:pass@db.*.supabase.co/postgres |
Supported LLM Frameworks
Through LiteLLM's native callback system, Memori supports all major frameworks:
| Framework | Status | Usage |
|---|---|---|
| OpenAI | ✓ Native Support | from openai import OpenAI |
| Anthropic | ✓ Native Support | from anthropic import Anthropic |
| LiteLLM | ✓ Native Support | from litellm import completion |
| LangChain | ✓ Supported | Integrated via LiteLLM |
| Azure OpenAI | ✓ Supported | Configure using ProviderConfig.from_azure() |
| 100+ Models | ✓ Supported | Any LiteLLM-compatible provider |
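For example, since memori.enable() works at the interception layer, LiteLLM calls need no special handling. A minimal sketch, assuming the same behavior as the OpenAI example above:
from litellm import completion
from memori import Memori

memori = Memori(conscious_ingest=True)
memori.enable()  # interception applies to LiteLLM calls as well

response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What do you remember about my project?"}],
)
print(response.choices[0].message.content)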
Configuration Options
Database Configuration
from memori import Memori
memori = Memori(
database_connect="postgresql://user:pass@localhost/memori",
conscious_ingest=True, # Short-term working memory
auto_ingest=True, # Dynamic search on every query
openai_api_key="sk-..."
)
memori.enable()
Memory Modes
Conscious Mode - One-time working memory injection
memori = Memori(conscious_ingest=True)
Auto Mode - Dynamic search on every query
memori = Memori(auto_ingest=True)
Combined Mode - Get the best of both modes
memori = Memori(conscious_ingest=True, auto_ingest=True)
Environment Variable Configuration
from memori import Memori, ConfigManager
config = ConfigManager()
config.auto_load() # Load from environment variables or config file
memori = Memori()
memori.enable()
Set environment variables:
export MEMORI_DATABASE__CONNECTION_STRING="postgresql://..."
export MEMORI_AGENTS__OPENAI_API_KEY="sk-..."
export MEMORI_MEMORY__NAMESPACE="production"
How It Works
Memori works by intercepting LLM calls, injecting context before each call and logging information after it:
Pre-call (Context Injection)
- Your application calls client.chat.completions.create(messages=[...])
- Memori transparently intercepts the call
- A Retrieval Agent (Auto mode) or Conscious Agent (Conscious mode) retrieves relevant memories
- Context is injected into the messages before sending to the LLM provider
Post-call (Logging)
- The LLM provider returns a response
- A Memory Agent extracts entities, categorizes them (facts, preferences, skills, rules, context)
- The conversation is stored in the SQL database with full-text search indexing
- The original response is returned to your application
Background Processing (Every 6 hours)
- The Conscious Agent analyzes patterns, promoting important memories from long-term to short-term storage
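The whole pipeline can be pictured as a simple wrapper around the chat call. The sketch below is a conceptual illustration of this intercept pattern, not Memori's actual implementation; retrieve_memories and store_conversation are hypothetical stand-ins for the Retrieval Agent and Memory Agent.
# Conceptual sketch of the intercept pattern described above -- NOT
# Memori's real code. retrieve_memories() and store_conversation() are
# hypothetical stand-ins for the Retrieval Agent and Memory Agent.
from openai import OpenAI

client = OpenAI()

def retrieve_memories(messages):
    # Hypothetical: search the SQL store for context relevant to the query.
    return "User is building a FastAPI project."

def store_conversation(messages, reply):
    # Hypothetical: extract entities and persist the exchange to SQL.
    pass

def chat_with_memory(messages, model="gpt-4o-mini"):
    # Pre-call: inject retrieved context as a system message.
    context = retrieve_memories(messages)
    augmented = [{"role": "system", "content": f"Known context: {context}"}] + messages

    # The actual LLM call, unchanged from the caller's point of view.
    response = client.chat.completions.create(model=model, messages=augmented)
    reply = response.choices[0].message.content

    # Post-call: log the exchange for future retrieval.
    store_conversation(messages, reply)
    return reply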
Example Application Scenarios
Basic Examples
- Basic Usage - Simple memory setup
- Personal Assistant - AI assistant with memory
- Memory Retrieval - Function calling
- Advanced Configuration - Production environment setup
Multi-user Scenarios
- Simple Multi-user - User memory isolation (sketched below)
- FastAPI Multi-user Application - REST API with Swagger
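A hypothetical sketch of user isolation follows. The MEMORI_MEMORY__NAMESPACE environment variable above suggests Memori scopes memories by namespace, but whether the constructor accepts a namespace keyword as written here is an assumption; see the official multi-user examples for the exact API.
from memori import Memori

def memori_for_user(user_id: str) -> Memori:
    # One namespace per user keeps memories from leaking between accounts.
    m = Memori(
        database_connect="sqlite:///multi_user_memory.db",
        conscious_ingest=True,
        namespace=f"user-{user_id}",  # hypothetical keyword argument
    )
    m.enable()
    return m

alice_memori = memori_for_user("alice")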
Framework Integration Examples
Memori provides integration examples with several popular AI frameworks:
- Agno
- AWS Strands
- Azure AI Foundry
- AutoGen
- CamelAI
- CrewAI
- Digital Ocean AI
- LangChain
- OpenAI Agent
- Swarms
Online Demos
- Personal Diary Assistant - Streamlit application available online
- Research Assistant Agent - Research tool available online
Technical Architecture
Memori adopts a layered architectural design:
- Interception Layer - Transparently intercepts LLM API calls
- Retrieval Layer - Intelligently retrieves relevant memory context
- Storage Layer - SQL database for persistent storage
- Analysis Layer - Background analysis and memory optimization
For detailed architectural documentation, please refer to architecture.md in the official documentation.
Enterprise Edition (Memori v3)
Memori is opening a small private testing group for its v3 Enterprise Edition. If you'd like to learn more and get early access to the new memory architecture for enterprise AI, you can join their testing program.
Community & Support
- Documentation: https://memorilabs.ai/docs
- Discord Community: https://discord.gg/abD4eGym6v
- GitHub Issues: https://github.com/GibsonAI/memori/issues
Contribution Guide
Memori welcomes community contributions! The project provides detailed contribution guidelines, including:
- Development environment setup
- Code style and standards
- Submitting Pull Requests
- Reporting issues
Open Source License
Apache 2.0 License
Summary
Memori is a powerful and easy-to-use AI memory solution, especially suitable for:
- Developers who need to add memory capabilities to LLM applications
- Teams building multi-session AI assistants
- Projects aiming to reduce vector database costs
- Enterprises that want full control over AI memory data
Through its SQL-native storage and one-line code integration design philosophy, Memori significantly lowers the barrier and cost of adding memory functionality to AI applications.