Add persistent memory to your AI agents with semantic search and automatic fact extraction. Built on PostgreSQL + pgvector.
```python
from cluttr import Cluttr

memory = Cluttr(config)

async with memory:
    # Store memories from conversations
    await memory.add(messages, user_id="user_123")

    # Search relevant memories
    results = await memory.search("programming language preferences", user_id="user_123")
```

Cluttr handles the complexity of memory management so you can focus on building great AI experiences.
- An LLM automatically extracts important facts, preferences, and context from conversations.
- Queries are automatically optimized by the LLM for better semantic vector matching.
- Images in conversations are automatically summarized and stored as searchable memories.
- LLM-powered deduplication prevents storing redundant or semantically similar facts.
- Works with OpenAI (GPT-4o-mini) and AWS Bedrock (Claude + Titan embeddings).
- Built on battle-tested PostgreSQL with pgvector for reliable vector similarity search.
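Under the hood, pgvector ranks stored embeddings by their distance to the query embedding (its `<=>` operator computes cosine distance). The idea can be sketched in plain Python; the toy 3-dimensional vectors below stand in for real embeddings from the LLM provider:

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's <=> operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy memory store of (text, embedding) pairs; real embeddings have hundreds of dimensions.
memories = [
    ("User prefers Python", [0.85, 0.15, 0.05]),
    ("User lives in Berlin", [0.0, 0.8, 0.6]),
    ("User likes static typing", [0.7, 0.3, 0.1]),
]

query = [0.8, 0.2, 0.1]  # pretend embedding of "programming language preferences"

# Rank memories by ascending distance, like SQL's ORDER BY embedding <=> query
ranked = sorted(memories, key=lambda m: cosine_distance(query, m[1]))
print(ranked[0][0])  # → "User prefers Python"
```

In production the same ranking happens inside PostgreSQL, so only the top matches ever leave the database.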
Get up and running in minutes.
```shell
uv add cluttr
```

```python
from cluttr import Cluttr

config = {
    "vector_db": {
        "engine": "postgres",
        "host": "localhost",
        "database": "cluttr",
        "user": "postgres",
        "password": "secret",
    },
    "llm": {
        "provider": "openai",  # or "bedrock"
        "api_key": "sk-...",
    },
}

memory = Cluttr(config)

async with memory:
    # Add memories from a conversation
    await memory.add([
        {"role": "user", "content": "I love Python!"},
        {"role": "assistant", "content": "Great choice!"},
    ], user_id="user_123")

    # Search memories
    results = await memory.search(
        "programming preferences",
        user_id="user_123",
    )
```
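Semantic deduplication of the kind described above is commonly implemented by comparing a new fact's embedding against stored embeddings and skipping the insert when similarity exceeds a threshold. A minimal sketch of that idea follows; it is illustrative only, not Cluttr's internal code, and the helper names and the 0.95 threshold are assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_duplicate(new_emb, stored_embs, threshold=0.95):
    """Treat a fact as a duplicate if any stored embedding is near-identical.
    Threshold is a hypothetical tuning knob, not a Cluttr setting."""
    return any(cosine_similarity(new_emb, e) >= threshold for e in stored_embs)

stored = [[1.0, 0.0], [0.6, 0.8]]
print(is_duplicate([0.99, 0.05], stored))  # near [1.0, 0.0] → True
print(is_duplicate([0.0, 1.0], stored))    # no close match → False
```

In practice an LLM pass can also catch paraphrases that embedding distance alone would miss, which is what "LLM-powered deduplication" refers to.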