<div align="center">
<h1 style="font-family: 'Bookman Old Style', serif;">
EverMemOS
<br>
<a href="https://everm.ai/" target="_blank">
<img src="figs/logo.png" alt="EverMemOS" height="34" />
</a>
</h1>
<p><strong>Driven by Understanding in Every Interaction</strong> · Enterprise-Grade Intelligent Memory System</p>
<p>
<img alt="Python" src="https://img.shields.io/badge/Python-3.10+-0084FF?style=flat-square&logo=python&logoColor=white" />
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-00B894?style=flat-square&logo=apache&logoColor=white" />
<img alt="Docker" src="https://img.shields.io/badge/Docker-Supported-4A90E2?style=flat-square&logo=docker&logoColor=white" />
<img alt="FastAPI" src="https://img.shields.io/badge/FastAPI-Latest-26A69A?style=flat-square&logo=fastapi&logoColor=white" />
<img alt="MongoDB" src="https://img.shields.io/badge/MongoDB-7.0+-00C853?style=flat-square&logo=mongodb&logoColor=white" />
<img alt="Elasticsearch" src="https://img.shields.io/badge/Elasticsearch-8.x-0084FF?style=flat-square&logo=elasticsearch&logoColor=white" />
<img alt="Milvus" src="https://img.shields.io/badge/Milvus-2.4+-00A3E0?style=flat-square" />
<img alt="Redis" src="https://img.shields.io/badge/Redis-7.x-26A69A?style=flat-square&logo=redis&logoColor=white" />
<a href="https://github.com/EverMind-AI/EverMemOS/releases">
<img alt="Release" src="https://img.shields.io/badge/release-v1.1.0-4A90E2?style=flat-square" />
</a>
</p>
<p>
<a href="README.md">English</a> | <a href="README_zh.md">简体中文</a>
</p>
</div>
---
> 💬 **More than just memory, it's foresight.**
**EverMemOS** is a forward-looking **intelligent system**.
Traditional AI memory is merely a database for "reviewing the past." EverMemOS enables AI not only to "remember" what happened, but also to "understand" what those memories mean and to use that understanding to guide current actions and decisions. In the EverMemOS demo tool, you can see how EverMemOS extracts important information from your historical data and then recalls your **preferences, habits, and history** during conversations, just like a **friend** who truly knows you.
In the **LoCoMo** benchmark, our EverMemOS-based method achieved a **92.3% reasoning accuracy** under the **LLM-Judge** evaluation, outperforming similar methods we tested.
---
## 📢 Latest Updates
<table>
<tr>
<td width="100%" style="border: none;">
**[2025-11-27] 🎉 🎉 🎉 EverMemOS v1.1.0 Released!**
- 🔧 **vLLM Support**: Supports vLLM deployment for Embedding and Reranker models (currently customized for the Qwen3 series)
- 📊 **Evaluation Resources**: Complete results and code for LoCoMo, LongMemEval, and PersonaMem have been released
<br/>
**[2025-11-02] 🎉 🎉 🎉 EverMemOS v1.0.0 Released!**
- ✨ **Stable Version**: AI memory system officially open-sourced
- 📚 **Documentation Improved**: Provides a quick start guide and complete API documentation
- 📈 **Benchmark**: LoCoMo dataset benchmark testing process
- 🖥️ **Demo Tool**: Get started quickly with an easy-to-use demo
</td>
</tr>
</table>
---
## 🎯 Core Vision
Build an AI memory that never forgets, allowing every conversation to build upon prior understanding.
---
## 💡 Unique Advantages
<table>
<tr>
<td width="33%" valign="top">
<h3>🔗 Organized Context</h3>
<p><strong>More than "Fragments," Connect "Stories"</strong>: Automatically connect conversation fragments to build a clear thematic context, allowing AI to "understand clearly."</p>
<blockquote>
When faced with multi-threaded conversations, it can naturally distinguish between "progress discussions for Project A" and "strategy planning for Team B," and maintain coherent contextual logic within each topic.<br/><br/>
From scattered phrases to complete narratives, AI no longer "understands a sentence" but "understands the whole story."
</blockquote>
</td>
<td width="33%" valign="top">
<h3>🧠 Informed Perception</h3>
<p><strong>More than "Retrieval," Intelligent "Perception"</strong>: Proactively captures deep connections between memories and tasks, allowing AI to "think in context" at critical moments.</p>
<blockquote>
Imagine: When a user requests "food recommendations," AI proactively associates this with the key information that "you had dental surgery two days ago," automatically adjusting suggestions to avoid unsuitable options.<br/><br/>
This is <strong>Contextual Awareness</strong> - allowing AI's thinking to be truly based on understanding, rather than isolated responses.
</blockquote>
</td>
<td width="33%" valign="top">
<h3>💾 Dynamic Profile</h3>
<p><strong>More than "Archives," Dynamic "Growth"</strong>: Update user profiles in real-time, understanding you better with each conversation, allowing AI to "recognize you clearly."</p>
<blockquote>
Each of your interactions will subtly update AI's understanding of you - preferences, styles, and focus points are constantly evolving.<br/><br/>
As interactions deepen, it's not just "remembering what you said," but "learning who you are."
</blockquote>
</td>
</tr>
</table>
---
## 📑 Table of Contents
<div align="center">
<table>
<tr>
<td width="50%" valign="top">
- [📖 Project Introduction](#-project-introduction)
- [🎯 System Framework](#-system-framework)
- [📁 Project Structure](#-project-structure)
- [🚀 Quick Start](#-quick-start)
- [Environment Requirements](#environment-requirements)
- [Installation Steps](#installation-steps)
- [How to Use](#how-to-use)
- [More Details](#more-detailed-information)
</td>
<td width="50%" valign="top">
- [📚 Documentation](#-documentation)
- [Development Documentation](#development-documentation)
- [API Documentation](#api-documentation)
- [Core Framework](#core-framework)
- [🏗️ Architecture Design](#️-architecture-design)
- [🤝 Contribution](#-contribution)
- [🌟 Join Us](#-join-us)
- [🙏 Acknowledgments](#-acknowledgments)
</td>
</tr>
</table>
</div>
---
## 📖 Project Introduction
**EverMemOS** is an open-source project designed to provide long-term memory capabilities for conversational AI agents. It extracts, constructs, and retrieves information from conversations, enabling agents to maintain context, recall past interactions, and gradually build user profiles. This makes conversations more personalized, coherent, and intelligent.
> 📄 **Paper Coming Soon** - Our technical paper is in preparation, stay tuned!
## 🎯 System Framework
EverMemOS operates along two main lines: **Memory Construction** and **Memory Perception**. Together they form a closed cognitive loop, enabling the system to continuously absorb, consolidate, and apply past information, so that every response is grounded in real context and long-term memory.
<p align="center">
<img src="figs/overview.png" alt="Overview" />
</p>
### 🧩 Memory Construction
Memory Construction Layer: builds structured, retrievable long-term memory from raw conversation data.
- **Core Elements**
- ⚛️ **MemCell**: The core memory structure unit extracted from conversations, facilitating subsequent organization and referencing
- 🗂️ **Multi-Level Memory**: Integrates related fragments by topic and context, forming reusable multi-level memories
- 🏷️ **Multi-Type Memory**: Covers plots, profiles, preferences, relationships, semantic knowledge, basic facts, and core memories
- **Workflow**
1. **MemCell Extraction**: Identifies key information in conversations and generates MemCells
2. **Structured Memory Construction**: Integrates by topic and participant to form plots and profiles
3. **Intelligent Storage Indexing**: Persistently saves and establishes keyword and semantic indexes to support rapid recall
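As an illustration of the workflow above, here is a minimal sketch of a MemCell-style unit and a toy extraction step. The field names beyond `MemCell` itself (`topic`, `participants`, `summary`, `keywords`) and the grouping logic are assumptions for illustration, not the project's actual schema or pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class MemCell:
    """Hypothetical memory unit extracted from one conversation span."""
    topic: str
    participants: list[str]
    summary: str
    keywords: list[str] = field(default_factory=list)

def extract_memcells(messages: list[dict]) -> list[MemCell]:
    """Toy stand-in for step 1: group messages by a naive 'topic' key."""
    by_topic: dict[str, list[dict]] = {}
    for msg in messages:
        by_topic.setdefault(msg.get("topic", "general"), []).append(msg)
    cells = []
    for topic, msgs in by_topic.items():
        cells.append(MemCell(
            topic=topic,
            participants=sorted({m["sender"] for m in msgs}),
            summary=" / ".join(m["content"] for m in msgs),
            keywords=[topic],
        ))
    return cells

cells = extract_memcells([
    {"sender": "user_101", "content": "大家早上好", "topic": "greeting"},
    {"sender": "user_103", "content": "本周完成产品设计", "topic": "project_a"},
])
print(len(cells))  # one MemCell per topic
```

In the real system, steps 2 and 3 would then merge such cells into plots and profiles and index them for keyword and semantic search.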
### 🔎 Memory Perception
Memory Perception Layer: quickly recalls memories relevant to a query, achieving accurate contextual awareness through multi-turn reasoning and intelligent fusion.
#### 🎯 Intelligent Retrieval Tools
- **🧪 Hybrid Retrieval (RRF Fusion)**
Executes semantic and keyword retrieval in parallel, then fuses the results with the Reciprocal Rank Fusion (RRF) algorithm
- **📊 Intelligent Re-ranking (Reranker)**
Re-ranks candidate memories by relevance depth, surfacing the most critical information first
Batched concurrent processing with exponential-backoff retries keeps it stable under high throughput
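The RRF fusion step can be sketched as follows. This is a generic implementation of Reciprocal Rank Fusion with the commonly used k=60 constant, not the project's actual code:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Higher fused score first
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["mem_3", "mem_1", "mem_7"]   # vector-search ranking
keyword  = ["mem_1", "mem_9", "mem_3"]   # BM25 ranking
fused = rrf_fuse([semantic, keyword])
print(fused)  # mem_1 (ranked 1st and 2nd) edges out mem_3 (2nd and 3rd)
```

The same function extends naturally to multi-way fusion: the agentic recall mode simply passes more than two ranked lists.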
#### 🚀 Flexible Retrieval Strategies
- **⚡ Lightweight Fast Mode**
For latency-sensitive scenarios, skips LLM calls and uses keyword retrieval (BM25) directly,
achieving faster response times
- **🎓 Agentic Multi-Turn Recall**
When the initial context is insufficient, generates 2-3 complementary queries, then retrieves and fuses in parallel,
improving coverage of complex intents through multi-way RRF fusion
#### 🧠 Reasoning Fusion
- **Context Integration**: Concatenates recalled multi-level memories (plots, profiles, preferences) with the current conversation
- **Traceable Reasoning**: The model generates responses based on clear memory evidence, avoiding hallucinations
💡 Through the cognitive closed-loop of **"Structured Memory → Multi-Strategy Recall → Intelligent Retrieval → Contextual Reasoning"**, AI always "thinks with memory," achieving true contextual awareness.
## 📁 Project Structure
<details>
<summary>Expand/Collapse Directory Structure</summary>
```
memsys-opensource/
├── src/ # Source code directory
│ ├── agentic_layer/ # Agentic layer - unified memory interface
│ ├── memory_layer/ # Memory layer - memory extraction
│ │ ├── memcell_extractor/ # MemCell extractor
│ │ ├── memory_extractor/ # Memory extractor
│ │ └── prompts/ # LLM prompt templates
│ ├── retrieval_layer/ # Retrieval layer - memory retrieval
│ ├── biz_layer/ # Business layer - business logic
│ ├── infra_layer/ # Infrastructure layer
│ ├── core/ # Core functions (DI/lifecycle/middleware)
│ ├── component/ # Components (LLM adapters, etc.)
│ └── common_utils/ # Common utilities
├── demo/ # Demo code
├── data/ # Sample conversation data
├── evaluation/ # Evaluation scripts
│ └── src/ # Evaluation framework source code
├── data_format/ # Data format definitions
├── docs/ # Documentation
├── config.json # Configuration file
├── env.template # Environment variable template
├── pyproject.toml # Project configuration
└── README.md # Project description
```
</details>
## 🚀 Quick Start
### Environment Requirements
- Python 3.10+
- uv
- Docker 20.10+ and Docker Compose 2.0+
- **At least 4GB of available memory** (for Elasticsearch and Milvus)
### Installation Steps
#### Start Dependent Services Using Docker ⭐
Use Docker Compose to start all dependent services (MongoDB, Elasticsearch, Milvus, Redis) with one click:
```bash
# 1. Clone the project
git clone https://github.com/EverMind-AI/EverMemOS.git
cd EverMemOS
# 2. Start Docker services
docker-compose up -d
# 3. Verify service status
docker-compose ps
# 4. Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# 5. Install project dependencies
uv sync
# 6. Configure environment variables
cp env.template .env
# Edit the .env file and fill in the necessary configurations
# - LLM_API_KEY: Enter your LLM API Key (for memory extraction)
# - VECTORIZE_API_KEY: Enter your DeepInfra API Key (for Embedding and Rerank)
# For detailed configuration instructions, please refer to: [Configuration Guide](docs/usage/CONFIGURATION_GUIDE_zh.md)
```
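For reference, a minimal `.env` might look like the sketch below. Only the two keys named in the comments above are shown, with placeholder values; copy `env.template` first and keep any other variables it defines.

```shell
# Minimal .env sketch (values are placeholders, not real keys)
LLM_API_KEY=sk-your-llm-key-here          # used for memory extraction
VECTORIZE_API_KEY=your-deepinfra-key-here # used for Embedding and Rerank
```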
**Docker Service Description**:
| Service | Host Port | Container Port | Purpose |
|------|-----------|---------|------|
| **MongoDB** | 27017 | 27017 | Main database, stores memory units and profiles |
| **Elasticsearch** | 19200 | 9200 | Keyword search engine (BM25) |
| **Milvus** | 19530 | 19530 | Vector database, semantic retrieval |
| **Redis** | 6379 | 6379 | Cache service |
> 💡 **Connection Tips**:
> - Use the **host port** when connecting (e.g., `localhost:19200` to access Elasticsearch)
> - MongoDB credentials: `admin` / `memsys123` (for local development only)
> - Stop services: `docker-compose down` | View logs: `docker-compose logs -f`
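If `docker-compose ps` looks healthy but connections still fail, a quick stdlib-only check of the host ports from the table above can help pinpoint the problem. This is a generic diagnostic sketch, not part of the project:

```python
import socket

SERVICES = {           # host ports from the table above
    "MongoDB": 27017,
    "Elasticsearch": 19200,
    "Milvus": 19530,
    "Redis": 6379,
}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICES.items():
    status = "up" if port_open("localhost", port) else "DOWN"
    print(f"{name:15s} localhost:{port:<6d} {status}")
```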
> 📖 Detailed MongoDB Installation Guide: [MongoDB Installation Guide](docs/usage/MONGODB_GUIDE_zh.md)
---
### How to Use
#### 🎯 Run Demo: Memory Extraction and Interactive Chat
The demo section showcases the end-to-end functionality of EverMemOS.
---
**🚀 Quick Start: Simple Demo (Recommended)** ⭐
The fastest way to experience EverMemOS! Just 2 steps to see the complete memory storage and retrieval process:
```bash
# Step 1: Start the API server (terminal 1)
uv run python src/run.py --port 8001
# Step 2: Run the simple demo (terminal 2)
uv run python src/bootstrap.py demo/simple_demo.py
```
**What it does:**
- Stores 4 conversation messages about sports hobbies
- Waits 10 seconds to build the index
- Searches for relevant memories with 3 different queries
- Shows the complete workflow and friendly instructions
**Suitable for:** First-time users, quick testing, understanding core concepts
View the demo code [`demo/simple_demo.py`](demo/simple_demo.py)
---
We have also set up complete experience scenarios:
**Prerequisite: Start the API Server**
```bash
# Terminal 1: Start the API server (required)
uv run python src/run.py --port 8001
```
> 💡 **Tip**: The API server must run continuously, so keep this terminal open. All the operations below should be performed in another terminal.
---
**Step 1: Extract Memory**
Run the memory extraction script to process sample conversation data and build the memory database:
```bash
# Terminal 2: Run the extraction script
uv run python src/bootstrap.py demo/extract_memory.py
```
This script will:
- Call `demo.tools.clear_all_data.clear_all_memories()` to ensure the demo starts from an empty MongoDB/Elasticsearch/Milvus/Redis state. Please ensure that the dependent services started by `docker-compose` are running before executing the script, otherwise the cleanup step will fail.
- Load `data/assistant_chat_zh.json`, add `scene="assistant"` to each message, and stream each record to `http://localhost:8001/api/v1/memories`. If you are hosting the API at a different endpoint or want to import different scenes, you can update the `base_url`, `data_file`, or `scene` constants in `demo/extract_memory.py`.
- Write only through the HTTP API: MemCells, plots, and profiles are created in the database instead of being saved in the `demo/memcell_outputs/` directory. You can check MongoDB (and Milvus/Elasticsearch) to verify data ingestion, or go directly to the chat demo.
> **💡 Tip**: For detailed configuration instructions and usage guides, please refer to the [Demo Documentation](demo/README_zh.md).
**Step 2: Chat with Memory**
After extracting the memory, start the interactive chat demo:
```bash
# Terminal 2: Run the Chat Program (Ensure the API Server is Still Running)
uv run python src/bootstrap.py demo/chat_with_memory.py
```
The program loads the `.env` file via `python-dotenv`, verifies that at least one LLM key (`LLM_API_KEY`, `OPENROUTER_API_KEY`, or `OPENAI_API_KEY`) is available, and connects to MongoDB via `demo.utils.ensure_mongo_beanie_ready` to enumerate groups that already contain MemCells. Each user query invokes `api/v1/memories/search` unless you explicitly choose Agentic mode, in which case the orchestrator switches to agentic retrieval and warns of additional LLM latency.
**Interaction Flow:**
1. **Select Language**: Choose a Chinese or English terminal interface.
2. **Select Scenario Mode**: Assistant mode (one-on-one) or group chat mode (multi-person analysis).
3. **Select Dialogue Group**: Real-time reading of groups from MongoDB via `query_all_groups_from_mongodb`; please run the extraction step first so that the list is non-empty.
4. **Select Retrieval Mode**: `rrf`, `embedding`, `bm25`, or LLM-guided Agentic retrieval.
5. **Start Chatting**: Ask questions, check the retrieved memories displayed before each response, and use `help`, `clear`, `reload`, or `exit` to manage the session.
---
#### 📊 Run Evaluation: Benchmarking
The evaluation framework provides a unified modular approach to benchmark memory systems on standard datasets (LoCoMo, LongMemEval, PersonaMem).
**Quick Tests (Smoke Tests)**:
```bash
# Test with limited data to verify everything is working
# Default: first conversation, first 10 messages, first 3 questions
uv run python -m evaluation.cli --dataset locomo --system evermemos --smoke
# Custom smoke test: 20 messages, 5 questions
uv run python -m evaluation.cli --dataset locomo --system evermemos \
--smoke --smoke-messages 20 --smoke-questions 5
# Test different datasets
uv run python -m evaluation.cli --dataset longmemeval --system evermemos --smoke
uv run python -m evaluation.cli --dataset personamem --system evermemos --smoke
# Test specific stages (e.g., only test search and answer stages)
uv run python -m evaluation.cli --dataset locomo --system evermemos \
--smoke --stages search answer
# Quickly view smoke test results
cat evaluation/results/locomo-evermemos-smoke/report.txt
```
**Full Evaluation**:
```bash
# Evaluate EverMemOS on the LoCoMo benchmark
uv run python -m evaluation.cli --dataset locomo --system evermemos
# Evaluate on other datasets
uv run python -m evaluation.cli --dataset longmemeval --system evermemos
uv run python -m evaluation.cli --dataset personamem --system evermemos
# Use --run-name to distinguish multiple runs (for A/B testing)
uv run python -m evaluation.cli --dataset locomo --system evermemos --run-name baseline
uv run python -m evaluation.cli --dataset locomo --system evermemos --run-name experiment1
# Resume from checkpoint if interrupted (automatic)
# Just rerun the same command - it will detect and resume from the checkpoint
uv run python -m evaluation.cli --dataset locomo --system evermemos
```
**View Results**:
```bash
# Results are saved to evaluation/results/{dataset}-{system}[-{run-name}]/
cat evaluation/results/locomo-evermemos/report.txt # Summary metrics
cat evaluation/results/locomo-evermemos/eval_results.json # Detailed results per question
cat evaluation/results/locomo-evermemos/pipeline.log # Execution log
```
The evaluation process includes 4 stages (add → search → answer → evaluate), supporting automatic checkpointing and recovery.
> **⚙️ Evaluation Configuration**:
> - **Data Preparation**: Datasets need to be placed in `evaluation/data/` (see `evaluation/README.md`)
> - **Environment Configuration**: Configure LLM API keys in `.env` (see `env.template`)
> - **Install Dependencies**: Run `uv sync --group evaluation` to install dependencies
> - **Custom Configuration**: Copy and modify YAML files in `evaluation/config/systems/` or `evaluation/config/datasets/`
> - **Advanced Usage**: See `evaluation/README.md` for checkpoint management, specific stage runs, and system comparisons
---
#### 🔌 Call API Interface
**Prerequisite: Start the API Server**
Before calling the API, ensure the API server is started:
```bash
# Start the API server
uv run python src/run.py --port 8001
```
> 💡 **Tip**: The API server needs to be running continuously; keep this terminal open. The API calls below need to be made in another terminal.
---
Use the Memory API to store single message memories:
<details>
<summary>Example: Store a Single Message</summary>
```bash
curl -X POST http://localhost:8001/api/v1/memories \
-H "Content-Type: application/json" \
-d '{
"message_id": "msg_001",
"create_time": "2025-02-01T10:00:00+00:00",
"sender": "user_103",
"sender_name": "Chen",
"content": "我们需要在本周完成产品设计",
"group_id": "group_001",
"group_name": "项目讨论组",
"scene": "group_chat"
}'
```
</details>
> ℹ️ `scene` is a required field that selects the memory extraction strategy; only `assistant` and `group_chat` are supported.
> ℹ️ Currently, all memory type extraction and storage are enabled by default.
**API Functionality**:
- **`POST /api/v1/memories`**: Store a single message memory
- **`GET /api/v1/memories/search`**: Memory retrieval (supports keyword/vector/hybrid retrieval modes)
For more API details, refer to the [Memory API Documentation](docs/api_docs/memory_api_zh.md).
---
**🔍 Retrieve Memories**
EverMemOS provides two retrieval modes: **Lightweight Retrieval** (fast) and **Agentic Retrieval** (intelligent).
**Lightweight Retrieval**
| Parameter | Required | Description |
|------|------|------|
| `query` | Yes* | Natural language query (*optional for profile data source) |
| `user_id` | No | User ID |
| `data_source` | Yes | `episode` / `event_log` / `foresight` / `profile` |
| `memory_scope` | Yes | `personal` (user_id only) / `group` (group_id only) / `all` (both) |
| `retrieval_mode` | Yes | `embedding` / `bm25` / `rrf` (recommended) |
| `group_id` | No | Group ID |
| `current_time` | No | Filter foresight within the validity period (format: YYYY-MM-DD) |
| `top_k` | No | Number of results to return (default: 5) |
**Example 1: Personal Memory**
<details>
<summary>Example: Personal Memory Retrieval</summary>
```bash
curl -X GET http://localhost:8001/api/v1/memories/search \
-H "Content-Type: application/json" \
-d '{
"query": "用户喜欢什么运动",
"user_id": "user_001",
"data_source": "episode",
"memory_scope": "personal",
"retrieval_mode": "rrf"
}'
```
</details>
**Example 2: Group Memory**
<details>
<summary>Example: Group Memory Retrieval</summary>
```bash
curl -X GET http://localhost:8001/api/v1/memories/search \
-H "Content-Type: application/json" \
-d '{
"query": "讨论项目进展",
"group_id": "project_team_001",
"data_source": "episode",
"memory_scope": "group",
"retrieval_mode": "rrf"
}'
```
</details>
> 📖 Complete Documentation: [Memory API](docs/api_docs/memory_api_zh.md) | Testing Tool: `demo/tools/test_retrieval_comprehensive.py`
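For callers not using curl, both endpoints can be exercised from Python's standard library. A hedged sketch, assuming the same request shapes as the curl examples above; note that the search endpoint takes a JSON body on GET, so the HTTP method must be set explicitly (urllib would otherwise default to POST whenever `data` is provided):

```python
import json
import urllib.request

BASE = "http://localhost:8001/api/v1/memories"

def build_request(url: str, payload: dict, method: str) -> urllib.request.Request:
    """Build a JSON request with an explicit HTTP method."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method=method,
    )

store = build_request(BASE, {
    "message_id": "msg_001",
    "create_time": "2025-02-01T10:00:00+00:00",
    "sender": "user_103",
    "content": "我们需要在本周完成产品设计",
    "group_id": "group_001",
    "scene": "group_chat",
}, method="POST")

search = build_request(BASE + "/search", {
    "query": "讨论项目进展",
    "group_id": "project_team_001",
    "data_source": "episode",
    "memory_scope": "group",
    "retrieval_mode": "rrf",
}, method="GET")

# With the API server running:
#   with urllib.request.urlopen(search) as resp:
#       print(json.load(resp))
print(store.get_method(), search.get_method())
```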
---
#### 📦 Batch Storage of Group Chat Memories
EverMemOS supports a standardized group chat data format ([GroupChatFormat](data_format/group_chat/group_chat_format.md)), which can be batch stored using a script:
```bash
# Batch store using a script (Chinese data)
uv run python src/bootstrap.py src/run_memorize.py \
--input data/group_chat_zh.json \
--api-url http://localhost:8001/api/v1/memories \
--scene group_chat
# Or use English data
uv run python src/bootstrap.py src/run_memorize.py \
--input data/group_chat_en.json \
--api-url http://localhost:8001/api/v1/memories \
--scene group_chat
# Validate file format
uv run python src/bootstrap.py src/run_memorize.py \
--input data/group_chat_zh.json \
--scene group_chat \
--validate-only
```
> ℹ️ **Scene Parameter Description**: The `scene` is a required field, used to specify the memory extraction strategy:
> - Use `assistant` for one-on-one assistant conversations
> - Use `group_chat` for multi-person group discussions
>
> **Note**: In the data files, you may see `scene` values of `work` or `company` - these are internal scene descriptors in the data format. The command-line argument `--scene` uses different values (`assistant`/`group_chat`) to specify which extraction pipeline to apply.
**GroupChatFormat Example**:
```json
{
"version": "1.0.0",
"conversation_meta": {
"group_id": "group_001",
"name": "项目讨论组",
"user_details": {
"user_101": {
"full_name": "Alice",
"role": "产品经理"
}
}
},
"conversation_list": [
{
"message_id": "msg_001",
"create_time": "2025-02-01T10:00:00+00:00",
"sender": "user_101",
"content": "大家早上好"
}
]
}
```
For a complete format description, refer to [Group Chat Format Specification](data_format/group_chat/group_chat_format.md).
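Before a batch import, the basic shape of a GroupChatFormat file can be sanity-checked in a few lines of Python. The required-field lists here are inferred from the example above; `run_memorize.py --validate-only` remains the authoritative check:

```python
import json

REQUIRED_TOP = ("version", "conversation_meta", "conversation_list")
REQUIRED_MSG = ("message_id", "create_time", "sender", "content")

def validate_group_chat(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the basic shape looks right."""
    problems = [f"missing top-level field: {k}" for k in REQUIRED_TOP if k not in doc]
    for i, msg in enumerate(doc.get("conversation_list", [])):
        for k in REQUIRED_MSG:
            if k not in msg:
                problems.append(f"message {i}: missing field: {k}")
    return problems

doc = json.loads("""
{
  "version": "1.0.0",
  "conversation_meta": {"group_id": "group_001", "name": "项目讨论组"},
  "conversation_list": [
    {"message_id": "msg_001", "create_time": "2025-02-01T10:00:00+00:00",
     "sender": "user_101", "content": "大家早上好"}
  ]
}
""")
print(validate_group_chat(doc))  # []
```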
### More Details
For detailed installation, configuration, and usage instructions, please refer to:
- 📚 [Quick Start Guide](docs/dev_docs/getting_started.md) - Complete installation and configuration steps
- ⚙️ [Configuration Guide](docs/usage/CONFIGURATION_GUIDE_zh.md) - Detailed explanation of environment variables and service configuration
- 📖 [API Usage Guide](docs/dev_docs/api_usage_guide.md) - Detailed explanation of API interfaces and data formats
- 🔧 [Development Guide](docs/dev_docs/development_guide.md) - Architecture design and best development practices
- 🚀 [Bootstrap Usage](docs/dev_docs/bootstrap_usage.md) - Instructions for using the script runner
- 📝 [Group Chat Format Specification](data_format/group_chat/group_chat_format.md) - Standardized data format
## 📚 Documentation
### Development Documentation
- [Quick Start Guide](docs/dev_docs/getting_started.md) - Installation, configuration, and startup
- [Development Guide](docs/dev_docs/development_guide.md) - Architecture design and best practices
- [Bootstrap Usage](docs/dev_docs/bootstrap_usage.md) - Script runner
### API Documentation
- [Memory API](docs/api_docs/memory_api_zh.md) - Memory management API
### Core Framework
- [Dependency Injection Framework](src/core/di/README.md) - DI container usage guide
### Demos and Evaluation
- [📖 Demo Guide](demo/README_zh.md) - Interactive examples and memory extraction demos
- [📊 Data Guide](data/README_zh.md) - Sample conversation data and format specifications
- [📊 Evaluation Guide](evaluation/README_zh.md) - Testing EverMemOS-based methods on standard benchmarks
## 🏗️ Architecture Design
EverMemOS adopts a layered architecture design, mainly including:
- **Agentic Layer**: Memory extraction, vectorization, retrieval, and re-ranking
- **Memory Layer**: Memory cell extraction, episodic memory management
- **Retrieval Layer**: Multi-modal retrieval and result ranking
- **Biz Layer**: Business logic and data operations
- **Infra Layer**: Database, cache, message queue, and other adapters
- **Core Framework**: Dependency injection, middleware, queue management, etc.
For more architectural details, refer to the [Development Guide](docs/dev_docs/development_guide.md).
## 🤝 Contribution
We welcome contributions of all kinds! Bug reports, feature suggestions, and code improvements are all greatly appreciated.
Before you start, please read our [Contribution Guide](CONTRIBUTING.md) for a quick overview of the development environment, code conventions, Git submission process, and Pull Request requirements.
## 🌟 Join Us
<!--
This section can add:
- Community communication methods (Discord, Slack, WeChat groups, etc.)
- Technical discussion forums
- Regular meeting information
- Contact email
-->
We are building a vibrant open-source community!
### Contact Information
<p>
<a href="https://github.com/EverMind-AI/EverMemOS/issues"><img alt="GitHub Issues" src="https://img.shields.io/badge/GitHub-Issues-blue?style=flat-square&logo=github"></a>
<a href="https://github.com/EverMind-AI/EverMemOS/discussions"><img alt="GitHub Discussions" src="https://img.shields.io/badge/GitHub-Discussions-blue?style=flat-square&logo=github"></a>
<a href="mailto:evermind@shanda.com"><img alt="Email" src="https://img.shields.io/badge/Email-联系我们-blue?style=flat-square&logo=gmail"></a>
<a href="https://www.reddit.com/r/EverMindAI/"><img alt="Reddit" src="https://img.shields.io/badge/Reddit-r/EverMindAI-orange?style=flat-square&logo=reddit"></a>
<a href="https://x.com/EverMindAI"><img alt="X" src="https://img.shields.io/badge/X-@EverMindAI-black?style=flat-square&logo=x"></a>
</p>
### Contributors
Thank you to all the developers who have contributed to this project!
<!-- Can be automatically generated using GitHub Contributors -->
<!-- <a href="https://github.com/your-org/memsys_opensource/graphs/contributors">
<img src="https://contrib.rocks/image?repo=your-org/memsys_opensource" />
</a> -->
## 📖 Citation
If you use EverMemOS in your research, please cite our paper (coming soon):
```
Coming soon
```
## 📄 License
This project is licensed under the [Apache 2.0 License](LICENSE). This means you are free to use, modify, and distribute this project, subject to the following key conditions:
- You must include a copy of the Apache 2.0 license
- You must state significant changes made to the code
- You must retain all copyright, patent, trademark, and attribution notices
- If a NOTICE file is included, it must be included in the distribution
## 🙏 Acknowledgments
<!--
This section can add:
- Projects that inspired us
- Open-source libraries used
- Supporting organizations or individuals
-->
Thanks to the following projects and communities for their inspiration and support:
- [Memos](https://github.com/usememos/memos) - Thanks to the Memos project, whose complete and well-standardized open-source note service provided valuable inspiration for our memory system design.
- [Nemori](https://github.com/nemori-ai/nemori) - Thanks to the Nemori project, whose self-organizing long-term memory system for agent LLM workflows likewise informed our design.
---
<div align="center">
**If this project helps you, please give us a ⭐️**
Made with ❤️ by the EverMemOS Team
</div>