# 📕 Xiaohongshu Creator MCP Toolkit
![](https://kayin-1253854796.cos.ap-shanghai.myqcloud.com/ownmedia/20250622023225261.jpg?imageSlim)
A powerful automation toolkit for Xiaohongshu, supporting integration with AI clients (such as Claude Desktop) via the MCP protocol, enabling content creation, publishing, and creator data analysis through conversations with AI.
## ✨ Main Features
- 🍪 **Cookie Management**: Securely obtain, validate, and manage Xiaohongshu login credentials
- 🤖 **MCP Protocol Support**: Seamless integration with AI clients like Claude Desktop and CherryStudio
- 📝 **Automated Publishing**: Supports automated publishing of image-text and video notes
- 🖼️ **Diverse Image Support**: Supports local images and web URLs
- ⏰ **Scheduled Tasks**: Supports scheduled data collection with cron expressions
- 📊 **Data Collection**: Automatically collects data from the Creator Center dashboard, content analysis, and fan data
- 🧠 **AI Data Analysis**: CSV data uses Chinese headers that AI can directly understand and analyze
- 💾 **Data Storage**: Supports local storage in CSV format (SQL storage is reserved but not yet implemented)
- 🎯 **Unified Interface**: A single tool to meet the automation needs of LLM operations on Xiaohongshu
## 📋 Feature List
### Login
- [x] **Login** - Supports traditional command line login and logging in through conversation with AI
### Content Publishing
- [x] **Image and Text Publishing** - Supports publishing image and text notes
- [x] **Video Publishing** - Supports publishing video notes
- [x] **Topic Tags** - Supports automatic addition of topic tags to enhance content exposure
- [ ] **Content Search** - Search for specified content (planned)
### Data Collection
- [x] **Dashboard Data** - Collect account overview data (number of followers, number of likes, etc.)
- [x] **Content Analysis Data** - Collect note performance data (views, number of likes, etc.)
- [x] **Follower Data** - Collect follower growth and analysis data
- [x] **Scheduled Collection** - Support for automatic scheduled collection with cron expressions
- [x] **Data Storage** - Local storage in CSV format (default)
## 📋 Environment Requirements
### 🌐 Browser Environment
- **Google Chrome Browser** (latest version recommended)
- **ChromeDriver** (version must match the Chrome version exactly)
### 🔍 Check Chrome Version
Visit in the Chrome browser: `chrome://version/`

### 📥 ChromeDriver Installation Method
#### Method 1: Automatic Download (Recommended)
```bash
# Using webdriver-manager for Automatic Management
pip install webdriver-manager
```
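If you prefer to let webdriver-manager resolve the driver in your own scripts, the usual Selenium 4 pattern looks roughly like the sketch below (the toolkit itself reads the driver path from `.env`, so this is for reference only):
```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# webdriver-manager downloads a ChromeDriver matching the installed Chrome
# and returns its path, so no manual version matching is needed.
driver_path = ChromeDriverManager().install()
driver = webdriver.Chrome(service=Service(driver_path))
driver.get("chrome://version/")
driver.quit()
```
The path returned by `ChromeDriverManager().install()` can also be used as the `WEBDRIVER_CHROME_DRIVER` value in `.env`.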
#### Method 2: Manual Download
1. 📋 Visit the official download page: [Chrome for Testing](https://googlechromelabs.github.io/chrome-for-testing/)
2. 🎯 Select the ChromeDriver that exactly matches your Chrome version
3. 📁 After downloading, extract it to an appropriate location (e.g., `/usr/local/bin/` or `C:\tools\`)
4. ⚙️ Configure the correct path in the `.env` file
#### Method 3: Package Manager Installation
```bash
# macOS (Homebrew)
brew install --cask chromedriver
# Windows (Chocolatey)
choco install chromedriver
# Linux (Ubuntu/Debian)
sudo apt-get install chromium-chromedriver
```
> ⚠️ **Important Note**: Version mismatch is the most common cause of issues. Please ensure that the ChromeDriver version matches the Chrome browser version exactly!
### 🌐 Remote Browser Connection
Supports connecting to a running remote Chrome instance, enhancing performance and supporting remote deployment scenarios.
#### 🔧 Configuration Method
Add the following configuration to the `.env` file:
```bash
# Enable Remote Browser Connection
ENABLE_REMOTE_BROWSER=true
REMOTE_BROWSER_HOST=http://xx.xx.xx.xx
REMOTE_BROWSER_PORT=xxxx
```
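Under the hood, a remote connection means attaching Selenium's `Remote` driver to the already-running instance instead of launching a local Chrome. The toolkit does this internally when `ENABLE_REMOTE_BROWSER=true`; the sketch below is illustrative only, and the host/port values are the same placeholders as in `.env`:
```python
import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Read the same variables as in .env (values here are placeholders)
host = os.getenv("REMOTE_BROWSER_HOST", "http://127.0.0.1")
port = os.getenv("REMOTE_BROWSER_PORT", "54444")

# Attach to the running Chrome/Selenium instance; no new browser is launched
driver = webdriver.Remote(command_executor=f"{host}:{port}/wd/hub", options=Options())
print(driver.title)
driver.quit()
```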
#### 🚀 Start Remote Chrome
Use the `docker-compose.yml` below to start a remote Chrome instance.
- If you encounter a permission error, check that the `./chrome-data` directory exists and that you have read and write access to it. If you do not, fix it as follows:
1. Run `docker run --rm selenium/standalone-chrome id seluser` to get the uid of `seluser`; for example, it may return `uid=1200(seluser) gid=1200(seluser) groups=1200(seluser)`
2. Run `sudo chown -R 1200:1200 ./chrome-data` to grant `seluser` read and write access, where `1200` is the uid from step 1
3. Re-run `docker-compose up --force-recreate` to start the container
```yaml
version: '3.8'
services:
  selenium-chrome:
    image: selenium/standalone-chrome:latest
    container_name: selenium-chrome
    ports:
      - "54444:4444"
      - "57900:7900"
    shm_size: 2g
    environment:
      - SE_VNC_NO_PASSWORD=1
    volumes:
      - ./chrome-data:/home/seluser  # Change the mount path to ensure permissions
    restart: unless-stopped
    command: >
      bash -c "mkdir -p /home/seluser/.config/google-chrome &&
      touch /home/seluser/.config/google-chrome/test.txt &&
      /opt/bin/entry_point.sh"
```
#### 💡 Use Cases
- **Remote Deployment**: Run Chrome on a server, connect locally
- **Performance Optimization**: Reuse already running Chrome instances to avoid repeated startups
- **Development Debugging**: Connect to an already logged-in Chrome instance to maintain session state
- **Docker Environment**: Share Chrome instances between containers
#### ⚠️ Notes
- A new Chrome instance will not be launched during remote connections.
- Ensure that the target Chrome instance has the remote debugging feature enabled.
- Certain operations (such as resizing the window) may not be supported in remote mode.
## 🚀 Quick Start
### 💡 Minimal Usage
```bash
# Clone the project
git clone https://github.com/aki66938/xhs-toolkit.git
cd xhs-toolkit

# Run (dependencies will be installed automatically)
./xhs      # Mac/Linux
xhs.bat    # Windows

# Or use Python
python install_deps.py   # Dependency installation wizard
./xhs                    # Start the program
```
### 🎮 Interactive Menu
After running `./xhs`, a friendly menu interface will be displayed:
```
╭─────────────────────────────────────────╮
│ Xiaohongshu MCP Toolkit v1.3.0 │
│ Quick Operation Menu System │
╰─────────────────────────────────────────╯
【Main Menu】
1. 🔄 Data Collection
2. 🌐 Browser Operations
3. 📊 Data Management
4. 🍪 Cookie Management
5. 🚀 MCP Server
6. ⚙️ System Tools
0. Exit
```
### 🛠️ Running from Source
#### Method 1: uv (Recommended ⚡)
```bash
# Clone the Project
git clone https://github.com/aki66938/xhs-toolkit.git
cd xhs-toolkit
# Install dependencies and run using uv
uv sync
uv run python xhs_toolkit.py status ## Verify if the tool is available
```
> 💡 **uv Usage Tip**: All `python` commands in the documentation can be replaced with `uv run python` for a faster dependency management experience!
#### Method 2: pip (Traditional Method)
```bash
# Clone the Project
git clone https://github.com/aki66938/xhs-toolkit.git
cd xhs-toolkit
# Create a Virtual Environment (Recommended)
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install Dependencies
pip install -r requirements.txt
python xhs_toolkit.py status ## Verify if the tool is available
```
## 🛠️ User Guide
### 1. Create Configuration File
Copy and edit the configuration file:
```bash
cp env_example .env
vim .env # Edit configuration
```
**Required Configuration**:
```bash
# Chrome Browser Path
CHROME_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
# ChromeDriver Path
WEBDRIVER_CHROME_DRIVER="/opt/homebrew/bin/chromedriver"
```
### 2. Obtain Login Credentials
```bash
# Method 1: Using Interactive Menu
./xhs
# Select 4 -> Cookie Management -> 1 -> Get New Cookies
# Method 2: Direct Command
./xhs cookie save
```
In the browser window that pops up (or, if you are connected to a remote browser, via its VNC interface at http://ip:57900), follow these steps:
1. Log in to the Xiaohongshu Creator Center
2. Make sure you can access the Creator Center features normally
3. When done, press Enter to save the cookies
### 3. Starting the MCP Server
```bash
# Method 1: Using the interactive menu
./xhs
# Select 5 -> MCP Server -> 1 -> Start Server

# Method 2: Direct command
./xhs server start
```
### 4. Client Configuration
**Claude Desktop**
#### Using uv (Recommended)
Add the following to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "xhs-toolkit": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/xhs-toolkit",
        "run",
        "python",
        "-m",
        "src.server.mcp_server",
        "--stdio"
      ]
    }
  }
}
```
#### Using System Python
If not using uv, it can be configured as follows:
```json
{
  "mcpServers": {
    "xhs-toolkit": {
      "command": "python3",
      "args": [
        "-m",
        "src.server.mcp_server",
        "--stdio"
      ],
      "cwd": "/path/to/xhs-toolkit",
      "env": {
        "PYTHONPATH": "/path/to/xhs-toolkit"
      }
    }
  }
}
```
**Note**:
- You need to replace `/path/to/xhs-toolkit` with the actual project path.
- macOS users configuration file location: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows users configuration file location: `%APPDATA%\Claude\claude_desktop_config.json`
- After modifying the configuration, you need to restart Claude Desktop.
**Cherry Studio**
Add to the MCP configuration

**n8n**
Add the configuration to the tools of the AI Agent node in n8n

## 🔧 Main Features
### MCP Tool List
| Tool Name | Function Description | Parameters | Remarks |
|-----------|---------------------|------------|--------|
| `test_connection` | Test MCP connection | None | Connection status check |
| `smart_publish_note` | Publish Xiaohongshu note ⚡ | title, content, images, videos, tags, topics | Supports local paths, network URLs, and topic tags |
| `check_task_status` | Check the status of the publishing task | task_id | View task progress |
| `get_task_result` | Get the results of completed tasks | task_id | Retrieve final publishing results |
| `login_xiaohongshu` | Smart login to Xiaohongshu | force_relogin, quick_mode | MCP dedicated non-interactive login |
| `get_creator_data_analysis` | Get creator data for analysis | None | For AI data analysis only |
### 💬 AI Conversational Operation Guide
You can complete operations such as logging in, posting, and data analysis by conversing with the AI, without the need to learn complex commands.
#### 🔐 Smart Login
```
User: "Log in to Xiaohongshu"
```
**Important Note**:
- 🚨 Do not change the `headless` setting on first use; switch to headless mode only after the cookies have been obtained.
- 🌐 After the AI calls the login tool, it will launch the browser. The first login requires manual input of the verification code or scanning the QR code.
- 🍪 Upon successful login, cookies will be automatically saved locally, so you won't need to log in again next time.
#### 📝 Content Publishing
**Image and Text Post (Local Image)**:
```
Please publish a Xiaohongshu note with the title: "Today's Share", content: "...", image path: "/User/me/xhs/poster.png"
```
**Image and Text Post (Online Image)**:
```
Please publish a Xiaohongshu note with the title: "Food Share", content: "Today's food", using this online image: https://example.com/food.jpg
```
**Video Post**:
```
Please publish a Xiaohongshu video with the title: "Today's Vlog", content: "...", video path: "/User/me/xhs/video.mp4"
```
**Post with Hashtags**:
```
Please publish a Xiaohongshu note with the title: "AI Learning Insights", content: "Today I learned the basics of machine learning", hashtags: "AI, Artificial Intelligence, Learning Insights", image: "/path/to/image.jpg"
```
#### 📊 Data Analysis
```
Please analyze my Xiaohongshu account data and provide content optimization suggestions.
```
#### 🔧 Publishing Mechanism
In a manual upload, the browser prompts the user to select a file. Here, the AI passes the file path you provide to the MCP tool, which completes the upload automatically.
#### ⚡ Intelligent Waiting Mechanism
- **📷 Image Upload**: Quick upload, no waiting required
- **🎬 Video Upload**: Polling to check upload progress, waiting for the "Upload Successful" indicator to appear
- **⏱️ Timeout Protection**: Maximum wait time of 2 minutes to avoid MCP (Model Context Protocol) call timeouts (see the polling sketch below)
- **📊 Status Monitoring**: DEBUG mode displays video file size and duration information
- **🔄 Efficient Polling**: Checks every 2 seconds with precise text matching
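For illustration, the polling loop described above could look like the sketch below; the locator and the "上传成功" (upload successful) text are assumptions made for this example, not the toolkit's actual selectors:
```python
import time

from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def wait_for_video_upload(driver, timeout=120, poll_interval=2):
    """Poll the publish page until the upload-success indicator appears."""
    deadline = time.time() + timeout                  # 2-minute timeout protection
    while time.time() < deadline:
        try:
            # Precise text match against the assumed status indicator
            driver.find_element(By.XPATH, "//*[text()='上传成功']")
            return True
        except NoSuchElementException:
            time.sleep(poll_interval)                 # check every 2 seconds
    return False                                      # timed out
```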
### 📊 Data Collection and AI Analysis Features
Automatically collect data from Xiaohongshu creators, supporting scheduled tasks and AI intelligent analysis.
#### 🧠 AI Data Analysis Features
- **Chinese Headers**: CSV files use Chinese headers, so AI can directly understand the meaning of the data (see the example after this list).
- **Intelligent Analysis**: Obtain complete data through the `get_creator_data_analysis` MCP tool.
- **Data-Driven**: AI provides content optimization suggestions based on real data.
- **Trend Analysis**: Analyze account performance trends and follower growth.
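As an illustration of why Chinese headers help, the collected CSV files can be inspected directly with pandas; the file name and column names below are hypothetical, since the actual layout is defined by the collection module:
```python
import pandas as pd

# Hypothetical file and column names, for illustration only
df = pd.read_csv("data/dashboard.csv")

print(df[["日期", "粉丝数", "点赞数"]].tail())               # date, followers, likes
print("Average daily follower growth:", df["粉丝数"].diff().mean())
```
Because the headers are plain Chinese words, an AI client reading the same file needs no extra data dictionary to interpret it.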
#### Data Types Collected
1. **Dashboard Data**: Overview data of the account, including follower count, likes, views, etc.
2. **Content Analysis Data**: Performance data of notes, including views, likes, comments, etc.
3. **Follower Data**: Follower growth trends, follower profile analysis, etc.
#### Scheduled Task Example
Configure the schedule with cron syntax in the `.env` file:
```bash
# Collect every 6 hours
COLLECTION_SCHEDULE=0 */6 * * *
# Collection at 9 AM on Workdays
COLLECTION_SCHEDULE=0 9 * * 1-5
# Collection at 2 AM on the 1st of every month
COLLECTION_SCHEDULE=0 2 1 * *
```
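For reference, a cron expression like the ones above can drive a collection job from plain Python. The sketch below uses APScheduler as an assumed scheduling backend and a placeholder `collect_all_data` function, so it illustrates the idea rather than the toolkit's actual scheduler:
```python
import os

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger


def collect_all_data():
    """Placeholder for the toolkit's data collection routine."""
    print("Collecting dashboard, content, and follower data...")


# Same variable as in .env; default: every 6 hours
schedule = os.getenv("COLLECTION_SCHEDULE", "0 */6 * * *")

scheduler = BlockingScheduler()
scheduler.add_job(collect_all_data, CronTrigger.from_crontab(schedule))
scheduler.start()
```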
### 🎯 Manual Operation Tools
An interactive menu and manual operation tools have been added to provide a more convenient operating experience:
#### Main Features
- **🔄 Data Collection**: Manually trigger data collection, supporting the selection of data types and time dimensions
- **🌐 Browser Operations**: Quickly open various pages of Xiaohongshu that are already logged in
- **📊 Data Management**: Export to Excel/JSON, analyze data trends, backup and restore
- **🍪 Cookie Management**: Retrieve, view, and verify the status of Cookies
#### Usage Example
```bash
# Start Interactive Menu
./xhs
# Or use the command line
./xhs manual collect --type all # Collect all data
./xhs manual browser --page publish # Open the publish page
./xhs manual export --format excel # Export to Excel
./xhs manual analyze # Analyze data trends
```
## 🚀 Changelog - v1.3.0
### 🎯 Important Feature Updates
#### 🏷️ Topic Tag Automation Feature (Complete Implementation)
- **Brand New Topic Automation System**: Achieves truly effective Xiaohongshu topic tag addition based on rigorous Playwright validation testing.
- **Intelligent Input Mechanism**: Uses the Actions class for character-by-character input and JavaScript event simulation to closely mimic real user behavior (sketched after this list).
- **Complete DOM Validation**: Supports detection of `data-topic` attributes and hidden identifiers, ensuring topics receive platform traffic recommendations.
- **Multiple Backup Solutions**: Various input methods and validation mechanisms provide a success rate guarantee of over 99%.
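As a rough sketch of the character-by-character input described above, written with plain Selenium; the element handling and timing are assumptions for illustration, not the internals of `topic_automation.py`:
```python
import time

from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys


def type_topic(driver, editor_element, topic):
    """Type '#<topic>' one character at a time so the suggestion dropdown fires."""
    ActionChains(driver).click(editor_element).send_keys("#").perform()
    for ch in topic:
        ActionChains(driver).send_keys(ch).perform()      # one character per event
        time.sleep(0.1)                                    # let the dropdown react
    time.sleep(1)                                          # wait for suggestions to load
    ActionChains(driver).send_keys(Keys.ENTER).perform()   # confirm the first suggestion
```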
#### 🔧 Topic Architecture Refactoring Upgrade
- **Terminology Unification**: Fully refactored from "标签" (label) to "话题" (topic), in line with the terminology of the Xiaohongshu platform.
- **Component Design**: Added a dedicated module `topic_automation.py`, providing basic and advanced automation features.
- **Interface Unification**: Updated all models, interfaces, and server code to maintain backward compatibility.
#### 🧪 Key Fixes Based on Actual Measurements
- **Input Method Fix**: Resolves the issue where directly using `send_keys` does not trigger the dropdown menu.
- **Validation Mechanism Improvement**: Multi-layer validation, including comprehensive metadata checks, ensures that entered text is successfully converted into real topic tags.
- **Error Handling Enhancement**: Multiple fallback options are available even if a certain step fails, ensuring functional stability.
### Usage Example
```python
# New topic feature usage (automatically supported in the MCP tool)
smart_publish_note(
    title="AI Learning Insights",
    content="Sharing some experiences in learning artificial intelligence",
    topics=["AI", "Artificial Intelligence", "Learning Insights"],  # New topics parameter
    images=["image.jpg"]
)
```
### Technical Details
- **Verification Test Coverage**: Based on three rigorous Playwright verification tests
- **DOM Structure Adaptation**: Fully compatible with the real topic tag DOM structure of Xiaohongshu
- **Performance Optimization**: Intelligent waiting mechanism and concurrent processing to enhance automation efficiency
### Test Results

---
<details>
<summary>📜 Click to view v1.2.5 changelog</summary>
## 🚀 Changelog - v1.2.5
### New Features
#### 🎮 Interactive Menu System
- Unified entry `./xhs`, no need to remember complex commands
- Numeric selection menu for more intuitive operation
- Real-time status display to understand system status
- Supports Windows (xhs.bat) and Unix systems
#### 🛠️ Manual Operation Toolkit
- **manual collect**: Manual data collection, supports selecting types and dimensions
- **manual browser**: Open the logged-in browser for quick access to various pages
- **manual export**: Export data in Excel or JSON format
- **manual analyze**: Analyze data trends and view the best notes
- **manual backup/restore**: Data backup and restore functionality
#### 🔧 Improved Dependency Management
- Intelligent detection of uv/pip environment
- Automatic selection of the best Python environment
- New `install_deps.py` installation wizard
- Supports both uv and pip installation methods
### Optimization Improvements
- Simplified the startup command, unified to use `./xhs`
- Improved Windows support by providing bat and PowerShell scripts
- Optimized code structure by splitting modules to avoid overly large single files
- Enhanced error handling and user prompts
</details>
---
<details>
<summary>📜 Click to view the v1.2.4 changelog</summary>
## 🚀 Changelog - v1.2.4
### New Features
#### 🌐 Network Image Support
- Supports direct publishing of HTTP/HTTPS image links
- Automatically downloads network images to a local temporary directory
- Supports common image formats (jpg, png, gif, webp)
#### 📁 Improved Image Processing
- Added `ImageProcessor` module to unify the handling of various image inputs
- Supports mixed inputs: `["local.jpg", "https://example.com/img.jpg"]`
- More flexible input format support
### Usage Example
```python
# Network images
smart_publish_note(
    title="Food Sharing",
    content="Today's Food",
    images=["https://example.com/food.jpg"]
)

# Mixed usage
smart_publish_note(
    title="Travel Journal",
    content="The scenery is beautiful",
    images=["/local/photo.jpg", "https://example.com/view.jpg"]
)
```
### Other Optimizations
- Improved text processing to retain line breaks
- Updated documentation notes
</details>
---
<details>
<summary>📜 Click to view the v1.2.3 changelog</summary>
## 🚀 Changelog - v1.2.3
### 🔧 Important Fixes
#### 🖥️ Headless Mode Optimization
- **Fix headless mode failure issue**: Enhance Chrome headless mode configuration by adding multiple insurance parameters.
- **Optimize browser startup logic**: Use both `--headless=new` and `--headless` for dual headless mode configuration.
- **Optimize configuration validation**: Ensure all modules use a unified HEADLESS configuration to avoid inconsistencies.
### 💡 Details
- Added multiple Chrome parameters such as `--disable-gpu-compositing`, `--disable-notifications`, etc.
- Improved the asynchronous initialization logic during MCP Server startup.
- Enhanced compatibility and stability in the Windows environment.
</details>
---
<details>
<summary>📜 Click to view the v1.2.2 changelog</summary>
## 🚀 Changelog - v1.2.2
### 🆕 New Features
#### 🔐 Intelligent Login System
- Added an automated login detection mechanism that supports non-interactive login in MCP mode.
- Implemented a quadruple detection mechanism: URL status, page elements, authentication, and error status detection.
- Added an intelligent waiting mechanism to automatically monitor the login completion status.
- Optimized the cookies saving logic to differentiate between interactive mode and automated mode.
#### 🧠 Intelligent Path Parsing System
- Added intelligent file path recognition feature, supporting automatic parsing of various input formats
- Introduced the `smart_parse_file_paths()` function, utilizing multiple parsing methods such as JSON parsing and ast.literal_eval
- Adapted for LLM (Large Language Model) dialogue scenarios and array data transmission on platforms like dify
**Supported Input Formats** (a parsing sketch follows this list):
- Comma-separated: `"a.jpg,b.jpg,c.jpg"`
- Array string: `"[a.jpg,b.jpg,c.jpg]"`
- JSON array: `'["a.jpg","b.jpg","c.jpg"]'`
- Real array: `["a.jpg", "b.jpg", "c.jpg"]`
- Mixed format: `"[a.jpg,'b.jpg',\"c.jpg\"]"`
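A minimal sketch of how these formats could be normalized into a list of paths (the real `smart_parse_file_paths()` implementation may differ):
```python
import ast
import json


def parse_file_paths(value):
    """Normalize the input formats listed above into a list of path strings."""
    if isinstance(value, list):                        # real array
        return [str(p).strip() for p in value]
    text = str(value).strip()
    for parser in (json.loads, ast.literal_eval):      # JSON array / Python literal
        try:
            parsed = parser(text)
            if isinstance(parsed, list):
                return [str(p).strip() for p in parsed]
        except (ValueError, SyntaxError):
            pass
    # Fallback: strip brackets and quotes, then split on commas
    return [p.strip().strip("'\"") for p in text.strip("[]").split(",") if p.strip()]


print(parse_file_paths('["a.jpg","b.jpg","c.jpg"]'))   # ['a.jpg', 'b.jpg', 'c.jpg']
print(parse_file_paths("[a.jpg,'b.jpg',\"c.jpg\"]"))   # ['a.jpg', 'b.jpg', 'c.jpg']
```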
#### 🛠️ Code Architecture Optimization
- Refactored login-related modules to improve code maintainability
- Optimized exception handling mechanism to enhance system stability
### 🔧 Fix Features
#### 📝 Path Processing Optimization
- Addressed the issue of format recognition for multiple image uploads reported by users
- Smartly distinguishes between string and array formats to avoid data type judgment errors
- Supports various data formats passed from different platforms (dify, LLM conversations, etc.)
- Enhanced fault tolerance, allowing for parsing even when the format is non-standard
</details>
---
## 🚀 Development Roadmap
### 📋 Features to be Developed
#### 🔥 High Priority
- **🔐 Headless Mode Login** - Improve the automatic login process in headless mode to enhance the automation experience.
#### 🔮 Long-term Planning
- **🤖 AI Creation Statement** - Intelligent detection of AI-generated content, automatically adding a creation statement label
- **👥 Multi-account Management** - Supports switching between multiple accounts for publishing (in accordance with platform policies, single IP is limited to 3 accounts)
- **🌐 Proxy Mode Support** - Works with multi-account functionality to support proxy network access
- **🐳 Docker Containerization** - Provides a containerized deployment solution for easier management and deployment of multiple instances
- **🔍 Content Review Mechanism** - Sensitive word reminders or filtering
## 🔧 Troubleshooting
### Common Issues with ChromeDriver
#### ❌ Problem: Version Mismatch Error
```
selenium.common.exceptions.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version XX
```
**✅ Solution**:
1. 🔍 Check Chrome version: Visit `chrome://version/`
2. 📥 Download the corresponding version of ChromeDriver: [Chrome for Testing](https://googlechromelabs.github.io/chrome-for-testing/)
3. ⚙️ Update the path configuration in the `.env` file
#### ❌ Problem: ChromeDriver Not Found
```
selenium.common.exceptions.WebDriverException: 'chromedriver' executable needs to be in PATH
```
**✅ Solution**:
1. Ensure that ChromeDriver has been downloaded and extracted.
2. Option A: Add ChromeDriver to the system PATH.
3. Option B: Configure the full path in `.env`: `WEBDRIVER_CHROME_DRIVER="/path/to/chromedriver"`
4. Linux/macOS: Ensure the file has execute permissions `chmod +x chromedriver`
#### ❌ Problem: Incorrect Chrome Browser Path
```
selenium.common.exceptions.WebDriverException: unknown error: cannot find Chrome binary
```
**✅ Solution**: Configure the correct Chrome path in the `.env` file
```bash
# macOS
CHROME_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
# Windows
CHROME_PATH="C:\Program Files\Google\Chrome\Application\chrome.exe"
# Linux
CHROME_PATH="/usr/bin/google-chrome"
```
### Other Common Issues
#### ❌ Problem: MCP connection failed
**✅ Solution**:
1. Confirm that the server is running: `python xhs_toolkit.py server start`
2. Check if port 8000 is occupied
3. Restart Claude Desktop or other MCP clients
#### ❌ Problem: Login Failed
**✅ Solution**:
1. Clear old cookies: Delete the `xhs_cookies.json` file
2. Re-fetch cookies: `python xhs_toolkit.py cookie save`
3. Ensure you are using the correct Xiaohongshu (小红书) Creator Center account
## 🙏 Contributors
Thank you to everyone who has contributed to the project!
<a href="https://github.com/aki66938/xhs-toolkit/graphs/contributors">
<img src="https://contrib.rocks/image?repo=aki66938/xhs-toolkit" />
</a>
If you would like to contribute to the project as well, feel free to submit a Pull Request or Issue!
## 📄 License
This project is open source under the [MIT License](LICENSE).
## 🔐 Security Commitment
- ✅ **Local Storage**: All data is stored locally only
- ✅ **Open Source Transparency**: The code is fully open source and auditable
- ✅ **User Control**: You have complete control over your data
<div align="center">
Made with ❤️ for content creators
</div>