# ComfyUI-AnimaTool
> [!NOTE]
> **Latest Updates**
> - 🎰 **Ten Consecutive Draws** — Supports `repeat` / `batch_size`; generating multiple images at once is no longer a dream
> - 🔄 **Switching Card Pools** — Switch between UNET / CLIP / VAE models and try out new card pools
> - 📋 **Visit History** — Browse past generation records, with reroll support
>
> ✅ **Simplified Cloud/Remote Connection** — New installation-free `uvx` mode: one line of configuration connects to a remote ComfyUI, no local environment required
> ✅ **MCP Client Released** — [SillyTavern MCP Client](https://github.com/Moeblack/sillytavern-mcp-client), supporting stdio + Streamable HTTP transport
>
> ✅ **Tool-Use Experience Fix** — [Tool Use Fix](https://github.com/Moeblack/sillytavern-tooluse-fix): merges fragmented messages, displays images directly in chat, Swipe proxy

> [!TIP]
> **Cherry Studio Now Supports MCP Image Display!**
> A PR we submitted fixes Cherry Studio's handling of MCP `ImageContent`. Until it is merged upstream, you can try the full MCP image functionality with the preview build:
> **Download Preview Version** → [Cherry Studio v1.7.17-preview](https://github.com/Moeblack/cherry-studio/releases/tag/v1.7.17-preview)
> Includes the following fixes:
> - Per-assistant configuration of whether MCP tool images are sent to the model
> - Fix base64 image data processing for OpenAI compatible providers
> - Fix MCP multimodal tool result conversion for Gemini
> - **Performance Fix v2**: Resolves severe lag after multiple rounds of image generation — large slices are excluded from IPC and base64 data is stripped in place (fixes the Zod safeParse clone that made the strip ineffective in v1.7.16-preview2) — [Upstream PR #12766](https://github.com/CherryHQ/cherry-studio/pull/12766)
<p align="center">
<img src="assets/hero.webp" alt="ComfyUI-AnimaTool Demo" width="100%">
</p>
<p align="center">
<b>Let AI Agents Directly Generate 2D Images, Natively Displayed in Chat Windows</b>
</p>
<p align="center">
Cursor / Claude / Gemini / OpenAI → MCP / HTTP API → ComfyUI → Anima Model
</p>
---

## Documentation
- [📖 Wiki & Prompt Guide](https://github.com/Moeblack/ComfyUI-AnimaTool/wiki) - Detailed prompt guide, installation tutorial, and API documentation.
- [🤖 Cursor Skill](CURSOR_SKILL.md) - **Required for Cursor / Windsurf users**! Use this file content as an Agent Skill to let AI learn how to write high-quality prompts.
## Features
- **MCP Server**: Images natively displayed in Cursor/Claude chat windows
- **HTTP API**: Starts with ComfyUI, no extra service required
- **Structured Prompts**: Automatically concatenated according to Anima specifications
- **Multi-Aspect Ratio Support**: 21:9 to 9:21 (14 preset ratios)
- **Reroll / History**: Re-generate from history with selected parameters overridden (change artist, add LoRA, etc.)
- **Batch Generation**: `repeat` parameter submits multiple independent tasks (queue mode), `batch_size` generates multiple images within a single task
---
## Related Projects
### SillyTavern Family
Using SillyTavern with AnimaTool to generate images? We recommend installing these companion plugins:
| Project | Description |
|------|------|
| [SillyTavern MCP Client](https://github.com/Moeblack/sillytavern-mcp-client) | Tavern MCP client, connects to AnimaTool and other MCP Servers, supports stdio + Streamable HTTP |
| [SillyTavern Tool Use Fix](https://github.com/Moeblack/sillytavern-tooluse-fix) | Tool usage experience fix, merges fragmented messages, directly displays images in conversations |
```
ComfyUI-AnimaTool (this project, MCP Server)
        ↕ MCP Protocol (stdio / streamable-http)
SillyTavern MCP Client (connection + tool registration)
        ↕ SillyTavern Tool Calling
Tool Use Fix (combined display + experience optimization)
```
### AnimaLoraToolkit - LoRA Training Tool
If you want to train your own LoRA/LoKr to use with Anima, it's recommended to use **[AnimaLoraToolkit](https://github.com/Moeblack/AnimaLoraToolkit)**:
- **YAML Configuration File** - Loaded via `--config`, command-line parameters can override
- **LoRA / LoKr Dual Mode** - Standard LoRA and LyCORIS LoKr
- **ComfyUI Compatible** - Output safetensors can be directly used in this tool
- **JSON Caption Support** - Structured tags, categorized shuffle
- **Real-time Training Monitoring** - Web interface displays loss curve and sampling images
- **Checkpoint Recovery** - Saves complete training state, supports resume training
After training, place the LoRA in the `ComfyUI/models/loras/` directory to load and use it with the `loras` parameter.
#### Example: Cosmic Princess Kaguya LoKr
LoKr (style + character) trained using AnimaLoraToolkit, replicating the 4K theatrical version of the Netflix animated movie "The Princess of the Cosmos!":
- **Download**: [Civitai](https://civitai.com/models/2366705)
- **Trigger Words**: `@spacetime kaguya` (style), `cosmic princess kaguya` (work)
- **Recommended Weight**: 0.8 - 1.0
---
## Installation
### Cherry Studio Users
If you're using Cherry Studio as your MCP client, you need to install our preview version to correctly display MCP-returned images:
1. Download [Cherry Studio v1.7.17-preview](https://github.com/Moeblack/cherry-studio/releases/tag/v1.7.17-preview) (installation or portable version)
2. After installation, follow the "Method 1: MCP Server" configuration below
3. Generated images will be directly displayed in the chat window
> The official Cherry Studio version has not merged this fix yet; using the official version will cause images to display as base64 gibberish, and severe lag will occur after multiple rounds of image generation.
> v1.7.17-preview is based on upstream v1.7.17 and fixes memory expansion and UI freezing issues after image generation ([details](https://github.com/CherryHQ/cherry-studio/pull/12766)).
### Method 1: ComfyUI Manager (Recommended)
1. Open ComfyUI Manager
2. Search for "Anima Tool"
3. Click Install
4. Restart ComfyUI
### Method 2: Manual Install
```bash
cd ComfyUI/custom_nodes
git clone https://github.com/Moeblack/ComfyUI-AnimaTool.git
pip install -r ComfyUI-AnimaTool/requirements.txt
```
### Prerequisites
Ensure the following model files are placed in the corresponding ComfyUI directories:
| File | Path | Description |
|------|------|------|
| `anima-preview.safetensors` | `models/diffusion_models/` | Anima UNET |
| `qwen_3_06b_base.safetensors` | `models/text_encoders/` | Qwen3 CLIP |
| `qwen_image_vae.safetensors` | `models/vae/` | VAE |
Model download: [circlestone-labs/Anima on Hugging Face](https://huggingface.co/circlestone-labs/Anima)
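A quick way to check the prerequisites from Python (a minimal sketch; `MODELS_DIR` is a placeholder for your own install path):

```python
from pathlib import Path

# Placeholder -- point this at your own ComfyUI models directory.
MODELS_DIR = Path("C:/ComfyUI/models")

# Relative paths from the prerequisites table above.
REQUIRED = {
    "diffusion_models/anima-preview.safetensors": "Anima UNET",
    "text_encoders/qwen_3_06b_base.safetensors": "Qwen3 CLIP",
    "vae/qwen_image_vae.safetensors": "VAE",
}

def missing_models(models_dir: Path) -> list[str]:
    """Return relative paths of required model files that are absent."""
    return [rel for rel in REQUIRED if not (models_dir / rel).is_file()]

# Example: print whatever is still missing.
# for rel in missing_models(MODELS_DIR):
#     print(f"missing: {rel} ({REQUIRED[rel]})")
```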
---
## Usage
### Method 0: Independent MCP (Recommended for Cloud/Remote ComfyUI, or No custom_nodes)
If you only want to connect to an already-running ComfyUI (local or cloud) and don't want to place this repository in `ComfyUI/custom_nodes/`, use the standalone PyPI package **[`comfyui-animatool`](https://github.com/Moeblack/animatool-mcp)** (installed as the `animatool-mcp` command). It works entirely through ComfyUI's standard API: `/prompt`, `/history/<id>`, `/view?...`.
#### Installation
**Method 1: Using uvx (Recommended, No Installation)**
No need to manually install Python packages; directly use `uvx` in Cursor configuration:
*(See JSON configuration below)*
**Method 2: Using pip**
```bash
pip install comfyui-animatool
```
**Method 3: Source Code Installation (for Development)**
```bash
pip install -e ./animatool-mcp
```
#### Cursor Configuration
Create `.cursor/mcp.json` in the project root directory (using `uvx` as an example):
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "uvx",
      "args": ["--from", "comfyui-animatool", "animatool-mcp"],
      "env": {
        "COMFYUI_URL": "http://127.0.0.1:8188",
        "ANIMATOOL_CHECK_MODELS": "false"
      }
    }
  }
}
```
#### Cloud Authentication (Optional)
If cloud ComfyUI requires authentication (reverse proxy/VPN/gateway, etc.), you can additionally set:
- `ANIMATOOL_BEARER_TOKEN`
- Or `ANIMATOOL_HEADERS_JSON` (custom Header JSON string)
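For example, a `uvx`-based entry with a bearer token (a sketch; the URL and token are placeholders):

```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "uvx",
      "args": ["--from", "comfyui-animatool", "animatool-mcp"],
      "env": {
        "COMFYUI_URL": "https://comfy.example.com",
        "ANIMATOOL_BEARER_TOKEN": "<YOUR_TOKEN>",
        "ANIMATOOL_CHECK_MODELS": "false"
      }
    }
  }
}
```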
> This method **does not depend** on this custom node being installed; it works as long as `COMFYUI_URL` is reachable.
---
### Method 1: MCP Server (Recommended, Native Image Display)
#### Cursor Configuration
Create `.cursor/mcp.json` in the project root directory:
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "<PATH_TO_PYTHON>",
      "args": ["<PATH_TO>/ComfyUI-AnimaTool/servers/mcp_server.py"]
    }
  }
}
```
**Example (Windows)**:
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "C:\\ComfyUI\\.venv\\Scripts\\python.exe",
      "args": ["C:\\ComfyUI\\custom_nodes\\ComfyUI-AnimaTool\\servers\\mcp_server.py"]
    }
  }
}
```
#### Install MCP Dependencies
```bash
pip install mcp
```
#### Usage
1. Ensure ComfyUI is running on `http://127.0.0.1:8188`
2. Restart Cursor to load MCP Server
3. Directly let AI generate images:
> Draw a girl in a white dress in a garden, vertical 9:16, safe

Images will be **natively displayed** in the chat window.
---
### Method 2: ComfyUI Built-in HTTP API
After starting ComfyUI, the following routes are automatically registered:
| Route | Method | Description |
|------|------|------|
| `/anima/health` | GET | Health Check |
| `/anima/schema` | GET | Tool Schema |
| `/anima/knowledge` | GET | Expert Knowledge |
| `/anima/generate` | POST | Execute Generation (supports `repeat` batch) |
| `/anima/history` | GET | View Recent Generation History |
| `/anima/reroll` | POST | Re-generate Based on History |
#### Example Call
**PowerShell**:
```powershell
$body = @{
    aspect_ratio = "3:4"
    quality_meta_year_safe = "masterpiece, best quality, newest, year 2024, safe"
    count = "1girl"
    artist = "@fkey, @jima"
    tags = "upper body, smile, white dress"
    neg = "worst quality, low quality, blurry, bad hands, nsfw"
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Uri "http://127.0.0.1:8188/anima/generate" -Method Post -Body $body -ContentType "application/json"
```
**curl**:
```bash
curl -X POST http://127.0.0.1:8188/anima/generate \
-H "Content-Type: application/json" \
-d '{"aspect_ratio":"3:4","quality_meta_year_safe":"masterpiece, best quality, newest, year 2024, safe","count":"1girl","artist":"@fkey, @jima","tags":"upper body, smile, white dress","neg":"worst quality, low quality, blurry, bad hands, nsfw"}'
```
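The same call from Python using only the standard library (a sketch; the payload mirrors the examples above, and the response shape depends on your server version):

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"

def build_payload(tags: str, aspect_ratio: str = "3:4") -> dict:
    """Assemble the required fields for /anima/generate."""
    return {
        "aspect_ratio": aspect_ratio,
        "quality_meta_year_safe": "masterpiece, best quality, newest, year 2024, safe",
        "count": "1girl",
        "artist": "@fkey, @jima",
        "tags": tags,
        "neg": "worst quality, low quality, blurry, bad hands, nsfw",
    }

def generate(tags: str, **kwargs) -> dict:
    """POST the payload to a running ComfyUI and return the parsed JSON."""
    req = urllib.request.Request(
        f"{COMFYUI_URL}/anima/generate",
        data=json.dumps(build_payload(tags, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.load(resp)

# With ComfyUI running locally:
# result = generate("upper body, smile, white dress")
```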
---
### Method 3: Independent FastAPI Server
```bash
cd ComfyUI-AnimaTool
pip install fastapi uvicorn
python -m servers.http_server
```
Access `http://127.0.0.1:8000/docs` to view Swagger UI.
---
## Parameters
### Required
| Parameter | Type | Description |
|------|------|------|
| `quality_meta_year_safe` | string | Quality/Year/Safety Label (must contain safe/sensitive/nsfw/explicit) |
| `count` | string | Number of people (`1girl`, `2girls`, `1boy`) |
| `artist` | string | Artist, **must include `@`** (e.g., `@fkey, @jima`) |
| `tags` | string | Danbooru Tags |
| `neg` | string | Negative Prompts |
### Optional
| Parameter | Type | Default Value | Description |
|------|------|--------|------|
| `aspect_ratio` | string | - | Aspect Ratio (automatically calculates resolution) |
| `width` / `height` | int | - | Directly specify resolution |
| `character` | string | `""` | Character Name |
| `series` | string | `""` | Series Name |
| `appearance` | string | `""` | Appearance Description |
| `style` | string | `""` | Style |
| `environment` | string | `""` | Environment/Lighting |
| `steps` | int | 25 | Steps |
| `cfg` | float | 4.5 | CFG |
| `seed` | int | Random | Seed |
| `sampler_name` | string | `er_sde` | Sampler |
| `repeat` | int | 1 | Submit several independent generation tasks (queue mode, each with a random seed). Total images = repeat × batch_size |
| `batch_size` | int | 1 | Generate several images within a single task (latent batch mode, more memory-intensive) |
| `loras` | array | `[]` | Optional: append LoRA (only for UNET). `name` is the relative path under `ComfyUI/models/loras/` (can include subdirectories), example: `[{"name":"_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors","weight":0.8}]` |
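For example, a hypothetical request body combining both batch parameters (`repeat: 2` with `batch_size: 2` yields four images in total):

```json
{
  "aspect_ratio": "1:1",
  "quality_meta_year_safe": "masterpiece, best quality, newest, year 2024, safe",
  "count": "1girl",
  "artist": "@fkey, @jima",
  "tags": "portrait, smile",
  "neg": "worst quality, low quality, blurry, bad hands, nsfw",
  "repeat": 2,
  "batch_size": 2
}
```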
### Supported Aspect Ratios
```
Landscape: 21:9, 2:1, 16:9, 16:10, 5:3, 3:2, 4:3
Square: 1:1
Portrait: 3:4, 2:3, 3:5, 10:16, 9:16, 1:2, 9:21
```
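How a preset ratio might map to a concrete resolution, given the ~1MP target and 16-pixel alignment controlled by `ANIMATOOL_TARGET_MP` / `ANIMATOOL_ROUND_TO` (a sketch of the idea, not the project's exact rounding logic):

```python
import math

def resolution_for(aspect_ratio: str, target_mp: float = 1.0,
                   round_to: int = 16) -> tuple[int, int]:
    """Pick a width/height near target_mp megapixels, snapped to round_to."""
    w_ratio, h_ratio = (int(x) for x in aspect_ratio.split(":"))
    target_px = target_mp * 1_000_000
    # Scale the ratio so width * height is close to the pixel budget.
    scale = math.sqrt(target_px / (w_ratio * h_ratio))
    width = round(w_ratio * scale / round_to) * round_to
    height = round(h_ratio * scale / round_to) * round_to
    return width, height

# resolution_for("16:9") -> a ~1MP landscape size divisible by 16
```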
---
### LoRA (Optional)
> The current version chains `LoraLoaderModelOnly` between **UNETLoader → KSampler(model)**, so it **only affects UNET** (won't change CLIP).
#### 1) Place LoRA in ComfyUI's loras directory
Your LoRA path (example):
- `G:\\AIGC\\ComfyUICommon\\models\\loras\\_Anima\\cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
Corresponding request `loras[i].name` should be written as (relative to `models/loras/`):
- Recommended writing method: `_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
> Note: ComfyUI will actually verify if `lora_name` is in the return list of `GET /models/loras`.
> - In Windows environment, this list usually uses backslash paths (e.g., `_Anima\\cosmic_kaguya_lokr_epoch4_comfyui.safetensors`)
> - **This project will automatically normalize separators based on the return value of `/models/loras`** (you can use `/` or `\\`), but if you manually fill in `lora_name` in ComfyUI, please be sure to copy the interface return value.
#### 2) Pass loras parameter during generation
You can directly refer to the example in this repository: [`examples/requests/generate_with_cosmic_kaguya_lora.json`](examples/requests/generate_with_cosmic_kaguya_lora.json)
```json
{
  "aspect_ratio": "3:4",
  "quality_meta_year_safe": "newest, year 2024, safe",
  "count": "1girl",
  "character": "kaguya",
  "series": "cosmic princess kaguya",
  "artist": "@spacetime kaguya",
  "appearance": "long hair, black hair, purple eyes",
  "tags": "school uniform, smile, standing, looking at viewer",
  "environment": "classroom, window, sunlight",
  "nltags": "A cheerful girl stands by the window.",
  "neg": "worst quality, low quality, blurry, bad hands, nsfw",
  "loras": [
    {
      "name": "_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors",
      "weight": 0.9
    }
  ]
}
```
#### 3) (Optional) Write sidecar metadata for LoRA to make it visible in MCP's list tool
To avoid indiscriminately exposing the entire `loras` directory to the MCP client, `list_anima_models(model_type="loras")` **only returns LoRAs that have a same-name `.json` sidecar metadata file**. For example:
- LoRA file: `ComfyUI/models/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
- sidecar: `ComfyUI/models/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json`
Example sidecar file can be referred to:
- [`examples/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json`](examples/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json)
> The field structure of the sidecar JSON is completely customizable, and this project only requires it to be valid JSON.
> Note: For the MCP server to read the sidecar, you must also set `COMFYUI_MODELS_DIR` to the **root of your models directory** (e.g., `C:\\ComfyUI\\models`, or `G:\\AIGC\\ComfyUICommon\\models` in the example above). Remote ComfyUI setups usually cannot read the remote filesystem, so only direct use of the `loras` parameter is supported there, not listing.
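Creating a sidecar can be scripted in a few lines (a sketch; the metadata fields shown are illustrative, since the project only requires valid JSON):

```python
import json
from pathlib import Path

def write_sidecar(lora_path: str, metadata: dict) -> Path:
    """Write a same-name .json sidecar next to a LoRA file so it appears
    in list_anima_models(model_type="loras")."""
    sidecar = Path(str(lora_path) + ".json")
    sidecar.write_text(json.dumps(metadata, indent=2, ensure_ascii=False),
                       encoding="utf-8")
    return sidecar

# Illustrative fields -- any valid JSON is accepted.
# write_sidecar("models/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors",
#               {"trigger_words": ["@spacetime kaguya"], "recommended_weight": 0.9})
```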
---
## Important Rules
1. **Artist must be prefixed with `@`**: e.g., `@fkey, @jima`; without the prefix the artist tag has almost no effect
2. **A safety label is required**: `safe` / `sensitive` / `nsfw` / `explicit`
3. **Recommended artist combination**: `@fkey, @jima` (stable results)
4. **Keep resolution around 1MP**: the Anima preview model is most stable there
5. **Keep the prompt on one line**: join tags with commas, no line breaks
---
## Directory Structure
```
ComfyUI-AnimaTool/
├── __init__.py                    # ComfyUI extension (registers /anima/* routes)
├── executor/                      # Core executor
│   ├── anima_executor.py
│   ├── config.py
│   ├── history.py                 # Generation history manager (memory + JSONL persistence)
│   └── workflow_template.json
├── knowledge/                     # Expert knowledge base
│   ├── anima_expert.md
│   ├── artist_list.md
│   └── prompt_examples.md
├── schemas/                       # Tool Schema
│   └── tool_schema_universal.json
├── servers/
│   ├── mcp_server.py              # MCP Server (native image return)
│   ├── http_server.py             # Independent FastAPI
│   └── cli.py                     # Command-line tool
├── assets/                        # Screenshots and other resources
├── outputs/                       # Generated images (gitignored)
├── README.md
├── LICENSE
├── CHANGELOG.md
├── pyproject.toml
└── requirements.txt
```
---
## Configuration
### Environment Variables (Recommended)
All configurations can be overridden using environment variables without modifying the code:
#### Basic Configuration
| Environment Variable | Default Value | Description |
|----------|--------|------|
| `COMFYUI_URL` | `http://127.0.0.1:8188` | ComfyUI service address |
| `ANIMATOOL_TIMEOUT` | `600` | Generation timeout (seconds) |
| `ANIMATOOL_DOWNLOAD_IMAGES` | `true` | Whether to save images locally |
| `ANIMATOOL_OUTPUT_DIR` | `./outputs` | Image output directory |
| `ANIMATOOL_TARGET_MP` | `1.0` | Target pixel count (MP) |
| `ANIMATOOL_ROUND_TO` | `16` | Resolution alignment multiple |
#### Model Configuration
| Environment Variable | Default Value | Description |
|----------|--------|------|
| `COMFYUI_MODELS_DIR` | *(not set)* | ComfyUI models directory path, used for model pre-check; also used for **LoRA sidecar metadata reading** (`list_anima_models(model_type="loras")`) |
| `ANIMATOOL_UNET_NAME` | `anima-preview.safetensors` | UNET model file name |
| `ANIMATOOL_CLIP_NAME` | `qwen_3_06b_base.safetensors` | CLIP model file name |
| `ANIMATOOL_VAE_NAME` | `qwen_image_vae.safetensors` | VAE model file name |
| `ANIMATOOL_CHECK_MODELS` | `true` | Whether to enable model pre-check |
### Set Environment Variables in Cursor MCP Configuration
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "C:\\ComfyUI\\.venv\\Scripts\\python.exe",
      "args": ["C:\\ComfyUI\\custom_nodes\\ComfyUI-AnimaTool\\servers\\mcp_server.py"],
      "env": {
        "COMFYUI_URL": "http://127.0.0.1:8188",
        "COMFYUI_MODELS_DIR": "C:\\ComfyUI\\models"
      }
    }
  }
}
```
### Model Pre-check
After setting `COMFYUI_MODELS_DIR`, model files will be checked for existence before generation:
```json
"env": {
  "COMFYUI_MODELS_DIR": "C:\\ComfyUI\\models"
}
```
If model files are missing, a friendly message lists them:
```
The following model files are missing:
- unet: diffusion_models/anima-preview.safetensors
- clip: text_encoders/qwen_3_06b_base.safetensors
Please download from HuggingFace: https://huggingface.co/circlestone-labs/Anima
and place them in the corresponding subdirectories of ComfyUI/models/
```
**Remote ComfyUI scenarios**: If `COMFYUI_MODELS_DIR` is not set, pre-check will be skipped (because remote file systems cannot be accessed).
### Remote/Docker ComfyUI Configuration
If ComfyUI is not running on the local machine:
**Other computers on the same LAN**:
```bash
export COMFYUI_URL=http://192.168.1.100:8188
```
**Docker container accessing the host machine**:
```bash
export COMFYUI_URL=http://host.docker.internal:8188
```
**WSL accessing Windows**:
```bash
export COMFYUI_URL=http://$(grep nameserver /etc/resolv.conf | awk '{print $2}'):8188
```
---
## Troubleshooting
### Error: Unable to Connect to ComfyUI
**Symptoms**: `Connection refused` or `Unable to connect to ComfyUI`
**Troubleshooting steps**:
1. Confirm ComfyUI is started: access `http://127.0.0.1:8188` in the browser
2. Confirm the port is correct: default is 8188, if changed, set `COMFYUI_URL`
3. Confirm the firewall does not block (Windows Defender / enterprise firewall)
4. If ComfyUI is remote/Docker, set the correct `COMFYUI_URL`
### Error: H,W should be divisible by spatial_patch_size
**Symptoms**: `H,W (xxx, xxx) should be divisible by spatial_patch_size 2`
**Cause**: Resolution is not a multiple of 16
**Solution**:
- Use preset `aspect_ratio` (e.g., `16:9`, `9:16`, `1:1`)
- If manually specifying `width`/`height`, ensure they are multiples of 16 (e.g., 512, 768, 1024)
### Error: Model File Does Not Exist
**Symptoms**: ComfyUI console reports `FileNotFoundError` or `Model not found`
**Solution**: Confirm the following files exist:
| File | Location |
|------|------|
| `anima-preview.safetensors` | `ComfyUI/models/diffusion_models/` |
| `qwen_3_06b_base.safetensors` | `ComfyUI/models/text_encoders/` |
| `qwen_image_vae.safetensors` | `ComfyUI/models/vae/` |
Download link: [circlestone-labs/Anima](https://huggingface.co/circlestone-labs/Anima)
### MCP Server Not Loaded?
1. **Check status**: Cursor Settings → MCP → anima-tool should display green
2. **View logs**: click "Show Output" to view errors
3. **Confirm path**: Python and script paths must be **absolute paths**
4. **Confirm dependencies**: `pip install mcp` (using ComfyUI's Python environment)
5. **Restart Cursor**: must restart after modifying configuration
### Generation Timeout?
**Symptoms**: waiting for a long time then reports `TimeoutError`
**Possible causes**:
- ComfyUI is loading models (first generation is slower)
- GPU memory is insufficient, leading to slow processing
- Step count `steps` is set too high
**Solution**:
- Increase timeout: `export ANIMATOOL_TIMEOUT=1200`
- Reduce step count: `steps: 25` (default value)
- Check ComfyUI console for errors
### API Call Hangs?
Ensure the latest version is used; old versions may have event loop blocking issues.
---
## System Requirements
- **Python**: 3.10+
- **ComfyUI**: latest version
- **GPU**: 8GB+ VRAM recommended (the Anima model is large)
- **Dependencies**: `mcp` (MCP Server), `requests` (optional, HTTP requests)
---
## Credits
- **Anima Model**: [circlestone-labs/Anima](https://huggingface.co/circlestone-labs/Anima)
- **ComfyUI**: [comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)
- **MCP Protocol**: [Anthropic Model Context Protocol](https://github.com/anthropics/anthropic-cookbook/tree/main/misc/model_context_protocol)
---
## License
AGPL-3.0 License - see [LICENSE](LICENSE) for details.
---
## Contributing
Feel free to submit Issues and Pull Requests!
1. Fork this repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request