Every other AI forgets everything after each session

MemoryAI doesn't

Give your AI a real brain

Not a database. Not a vector store. Not RAG.
A brain that remembers, forgets, links, sleeps, and protects itself.

brain.behaviors: 5 core biological behaviors + 2 advanced
  • REMEMBER stores preferences, decisions, and identity, and never loses them
  • FORGET lets junk fade away on its own, following the Ebbinghaus curve
  • LINK strengthens connections: mention "TypeScript" alongside "React" 10 times and the link grows stronger
  • SLEEP runs every 6h, compressing raw memories into insights and cross-checking DNA
  • PROTECT applies a 4-layer Shield against noise, spam, and hallucination
  • ADAPT gives each user their own brain, self-optimized to how they work
  • TIME handles change: "moved to Saigon" auto-expires "lives in Hanoi"
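The forgetting behavior above follows the classic Ebbinghaus curve. A minimal sketch of how such decay can be modeled; the formula is the standard exponential forgetting model, and the `stability` values are illustrative, not MemoryAI's actual internals:

```python
import math

def retention(hours_since_recall: float, stability: float = 24.0) -> float:
    """Ebbinghaus-style exponential forgetting: R = e^(-t/S).

    `stability` (S) grows each time a memory is recalled, so
    reinforced memories decay far more slowly than one-off noise.
    """
    return math.exp(-hours_since_recall / stability)

# A fact recalled often (high stability) vs. one-off chatter (low stability)
core_pref = retention(72, stability=240.0)   # ~0.74 after 3 days
noise     = retention(72, stability=12.0)    # ~0.002 after 3 days
```

Under this model, "junk" is simply whatever never earns enough recalls to raise its stability.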
Start free → See how it works. One API call. Works with any AI. 5-minute setup.
terminal — memoryai
$ pip install hmc-memory
$ pip show hmc-memory
  Name: hmc-memory  Version: 0.9.0
  Summary: Persistent memory for AI agents
  Home: https://memoryai.dev

$ npm install memoryai
  added 12 packages in 800ms
132 API Endpoints
37 Engine Modules
<50ms Recall Speed (p95)
99.9% Uptime


Everything your agent needs

Your AI will never forget again.

CORE

Store

Save memories: auto-embeds, auto-extracts entities, and deduplicates. One line of code.

CORE

Recall

Multiple scoring sources combined for highly relevant results. Under 50ms.

NEURAL

Brain Engine

Memories form connections over time, strengthening when recalled together and naturally fading when unused.
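The connection dynamics described here can be pictured as a simple Hebbian-style update; the learning rate and decay factor below are made-up illustrations, not the engine's real parameters:

```python
class Link:
    """Toy model of a connection between two memories."""

    def __init__(self, weight: float = 0.1):
        self.weight = weight

    def co_recall(self, lr: float = 0.3) -> None:
        # Strengthen: move the weight a fraction of the way toward 1.0.
        self.weight += lr * (1.0 - self.weight)

    def decay(self, gamma: float = 0.95) -> None:
        # Fade when unused: shrink the weight each idle period.
        self.weight *= gamma

ts_react = Link()
for _ in range(10):          # recalled together 10 times
    ts_react.co_recall()
# ts_react.weight is now near 1.0; a link that is never
# co-recalled just decays toward zero instead.
```

The asymmetry is the point: strengthening saturates at 1.0, while decay compounds, so unused links eventually vanish.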

AUTO

Auto-Extract

Feed raw conversation. System extracts facts, entities, preferences automatically. Every 30s.

V6

Context Guard

Automatic session protection that monitors and optimizes context windows. Works with every AI.

NIGHTLY

Auto-Dream

Nightly background processing optimizes memory, merges duplicates, and creates meaningful summaries.
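The duplicate-merging pass can be sketched as a similarity sweep over stored memories. This sketch uses token-overlap (Jaccard) similarity purely for illustration; the actual pipeline works on embeddings:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two memory texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def consolidate(memories: list[str], threshold: float = 0.8) -> list[str]:
    """Keep the first of each near-duplicate cluster, drop the rest."""
    kept: list[str] = []
    for m in memories:
        if all(jaccard(m, k) < threshold for k in kept):
            kept.append(m)
    return kept

merged = consolidate([
    "user prefers TypeScript for new projects",
    "user prefers TypeScript for new projects",   # exact duplicate
    "user lives in Saigon",
])
# two memories survive the nightly pass
```

Running this kind of pass in the background keeps recall fast without the caller ever cleaning up after themselves.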

Three steps to agent memory

Install, call the SDK, the agent remembers.

01

Install the SDK

pip install hmc-memory or npm install memoryai. Works with Python and Node.

02

Store memories

Send raw text to mem.store(). MemoryAI handles embedding, deduplication, entity extraction, and neural brain learning automatically.

03

Recall when needed

Call mem.recall("query") with natural language. Get relevant memories back sorted by vector similarity, brain score, full-text match, and recency. Under 50ms.
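The blended ranking in step 03 can be sketched as a weighted sum over the scoring sources listed above (vector similarity, brain score, full-text match, recency). The weights here are invented for illustration, not MemoryAI's actual tuning:

```python
def combined_score(vector_sim: float, brain: float,
                   fulltext: float, recency: float) -> float:
    """Blend the four recall signals into one ranking score.

    All inputs are assumed normalized to [0, 1]; the weights
    are illustrative only.
    """
    return 0.5 * vector_sim + 0.2 * brain + 0.2 * fulltext + 0.1 * recency

candidates = [
    ("User prefers TypeScript", combined_score(0.91, 0.8, 1.0, 0.6)),
    ("User asked about lunch",  combined_score(0.30, 0.1, 0.0, 0.9)),
]
ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
```

Note how recency alone cannot outrank a strong semantic and full-text match: fresh-but-irrelevant memories stay near the bottom.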

Five layers, one pipeline

Every API call flows through five layers. No setup, no config.

  • 01

    store & recall

    Content pool dedup → batch embedding → auto-extract → auto-link.

  • 02

    shield + immune

Quality filter, LLM judge, and behavior analysis block bad data.

  • 03

    neural brain engine

    Intelligent connections, natural decay.

  • 04

    context guard v6

    Session protection with anti-loop rate limiting. Works with every LLM.

  • 05

    Recall

    Multiple scoring sources combined for highly relevant results. Under 50ms.
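Layer 02's quality filtering can be sketched as a set of cheap pre-checks that run before anything heavier like an LLM judge; the heuristics and thresholds below are illustrative stand-ins:

```python
def passes_shield(text: str, seen: set[str]) -> bool:
    """Cheap first-pass checks: length, repetition, exact duplicates."""
    stripped = text.strip()
    if len(stripped) < 5:                      # too short to be a memory
        return False
    words = stripped.lower().split()
    if len(words) >= 3 and len(set(words)) <= len(words) // 3:
        return False                           # "buy buy buy buy" spam
    if stripped in seen:                       # exact duplicate
        return False
    seen.add(stripped)
    return True

seen: set[str] = set()
assert passes_shield("User prefers TypeScript over JavaScript", seen)
assert not passes_shield("hi", seen)                                   # too short
assert not passes_shield("User prefers TypeScript over JavaScript", seen)  # dup
```

Filtering at this layer means the expensive checks downstream only ever see plausible candidates.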

[Architecture diagram: store & recall (L2/L3), neural brain engine (graph), shield + immune (LLM), context guard v6]

Why choose MemoryAI?

Unlike basic storage solutions or building it yourself, MemoryAI gives you a complete, battle-tested memory system.

Feature                      MemoryAI  Basic Storage  DIY Solutions
Auto-deduplication           ✓         -              ~
Smart memory prioritization  ✓         -              -
Auto entity extraction       ✓         -              ~
Context Guard (anti-loop)    ✓         -              -
Nightly consolidation        ✓         -              -
Quality filter system        ✓         -              -
Multi-LLM support            ✓         Vendor locked  -

Set up for your IDE

One command for any IDE. Copy the config and you're done.

Cursor
01 Install MCP server with npx:
$ npx memoryai-mcp
02 Open Cursor Settings → Features → MCP Servers → Add new MCP server.
// ~/.cursor/mcp.json
{
  "mcpServers": {
    "memoryai": {
      "command": "npx",
      "args": ["-y", "memoryai-mcp"],
      "env": {
        "HM_ENDPOINT": "https://memoryai.dev",
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
03 Restart IDE. Try asking:
"Remember that I prefer TypeScript"
"What do you remember about my project?"
Roo Cline (VS Code)
01 Install MCP server with npx:
$ npx memoryai-mcp
02 Create or edit ~/.config/Code/User/globalStorage/rooveterinaryinc.roo-cline/settings/cline_mcp_settings.json:
{
  "mcpServers": {
    "memoryai": {
      "command": "npx",
      "args": ["-y", "memoryai-mcp"],
      "env": {
        "HM_ENDPOINT": "https://memoryai.dev",
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
03 Restart IDE. Try asking:
"Remember that I prefer TypeScript"
"What do you remember about my project?"
Claude Desktop
01 Install MCP server with npx:
$ npx memoryai-mcp
02 Edit Claude Desktop config at ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
  "mcpServers": {
    "memoryai": {
      "command": "npx",
      "args": ["-y", "memoryai-mcp"],
      "env": {
        "HM_ENDPOINT": "https://memoryai.dev",
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
03 Restart IDE. Try asking:
"Remember that I prefer TypeScript"
"What do you remember about my project?"
Windsurf
01 Install MCP server with npx:
$ npx memoryai-mcp
02 Open Windsurf Settings → Cascade → MCP → Add new server. Or create ~/.codeium/windsurf/mcp_config.json:
{
  "mcpServers": {
    "memoryai": {
      "command": "npx",
      "args": ["-y", "memoryai-mcp"],
      "env": {
        "HM_ENDPOINT": "https://memoryai.dev",
        "HM_API_KEY": "YOUR_KEY"
      }
    }
  }
}
03 Restart IDE. Try asking:
"Remember that I prefer TypeScript"
"What do you remember about my project?"
Kiro
01 Install MCP server with npx:
$ npx memoryai-mcp
02 Open Kiro Settings → MCP → Add server:
command: npx
args: [-y, memoryai-mcp]
env:
  HM_ENDPOINT: https://memoryai.dev
  HM_API_KEY: YOUR_KEY
03 Restart IDE. Try asking:
"Remember that I prefer TypeScript"
"What do you remember about my project?"
OpenClaw
01 Install MemoryAI skill from ClaWHub:
$ clawhub install memoryai
02 Add your API key to config:
# ~/.openclaw/workspace/skills/memoryai/config.json
{
  "endpoint": "https://memoryai.dev",
  "api_key": "YOUR_KEY"
}
03 Context Guard v6 now monitors every session automatically:
"Store this conversation context"
"Compact my memory before switching sessions"
Python SDK
01 Install with pip:
$ pip install hmc-memory
02 Use in your Python project:
from memoryai import MemoryAI

mem = MemoryAI(api_key="hm_sk_...")

# Store a memory
mem.store("User prefers TypeScript", tags=["preference"])

# Recall memories
results = mem.recall("user preferences", depth="deep")
for r in results:
    print(r.content, r.score)

Start free. Scale when ready.

Transparent pricing. No hidden fees. Cancel anytime.

Free
$0
Forever free. No credit card.
  • 500 chunks
  • Vector search
  • 132 endpoints
  • 100 stores/day
  • 14d memory retention
Get Started
Enterprise
$499+
Custom limits. Dedicated support.
  • Everything in ProMax
  • Unlimited everything
  • Dedicated support
  • Custom SLA
  • ∞ memory retention
Contact Sales

Works with every LLM

Vendor-neutral. Bring your own LLM. We handle the memory.

OpenAI
Anthropic
Google
LangChain
LlamaIndex
CrewAI
Ollama
Llama
Mistral
Cursor
VS Code
Windsurf

Common questions

Everything you need to know.

How is this different from a vector database?
A vector DB only stores embeddings. MemoryAI adds auto-deduplication, entity extraction, intelligent memory learning, contradiction detection, context guard, and nightly auto-dream consolidation. You just call mem.store() — we handle the rest.
What happens when I exceed my plan limits?
On Free plan, stores over 100/day will fail with a 429 error. Your memories are safe — we never delete active data. On paid plans, you get email alerts at 80% usage. Enterprise has auto-scaling.
Where is data stored? Is it GDPR compliant?
All data stored in Vietnam (HCMC) on PostgreSQL with encryption at rest. Transit encrypted via TLS 1.3 through Cloudflare. Compliant with PDPA Vietnam (Decree 13/2023). Export and delete data anytime via API.
Does this add latency to my agent?
Store: ~50-100ms (includes embedding). Recall: <50ms (p95 at 3K chunks). The SDK is non-blocking — your agent continues working while memory operations run in parallel.
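The non-blocking behavior described above can be approximated client-side with a thread pool, so a slow store call never stalls the agent loop. This is a generic fire-and-forget pattern sketch, not the SDK's actual internals, and `slow_store` is a hypothetical stand-in for the network call:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

def slow_store(text: str) -> str:
    """Hypothetical stand-in for a ~50-100ms network store call."""
    return f"stored:{text}"

# Fire-and-forget: the agent keeps working while the store completes.
future = pool.submit(slow_store, "User prefers TypeScript")
# ... agent continues its loop here ...
result = future.result()   # join later only if you need the ack
```

Waiting on the future only when you need confirmation keeps the hot path free of memory latency.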
Can I self-host MemoryAI?
Currently cloud-hosted only. Self-hosting is on the roadmap for Enterprise plans. Contact us for early access.
What happens if I cancel my subscription?
Your data is retained for 30 days after cancellation. You can export everything via GET /v1/export. After 30 days, unused tier drops to Free plan limits.


Build agents that remember

Drop in memory with one SDK call. Your AI will never forget again.