# Coder-Codex-Gemini (CCG)
<div align="center">




[English Docs](README_EN.md)
**Claude + Coder + Codex + Gemini Multi-Model Collaboration Framework**
Let **Claude/Sisyphus** act as the architect, scheduling **Coder** to execute code tasks, **Codex** to review code quality, and **Gemini** to provide expert consultation, <br>forming an **automated multi-party collaboration closed loop**.
**Supports both Claude Code (MCP) and OpenCode (Oh-My-OpenCode) runtime environments**
[Quick Start](#-quick-start) • [Core Features](#-core-features) • [Architecture Description](#-architecture-description) • [Tool Details](#️-tool-details) • [OpenCode Configuration](#-opencode-configuration)
</div>
---
## 🌟 Core Features
CCG builds an efficient, low-cost, and high-quality code generation and review pipeline by connecting multiple top-tier models:
| Dimension | Value Description |
| :--- | :--- |
| **🧠 Cost Optimization** | **Claude/Sisyphus** handles high-intelligence reasoning and scheduling (expensive but powerful), while **Coder** handles heavy code execution (cheap and plentiful). |
| **🧩 Capability Complementarity** | **Claude** complements **Coder's** creativity shortcomings, **Codex** provides an independent third-party review perspective, and **Gemini** provides diverse expert opinions. |
| **🛡️ Quality Assurance** | Introduces a dual review mechanism: **Claude Preliminary Review** + **Codex Final Review** to ensure code robustness. |
| **🔄 Fully Automated Closed Loop** | Supports a fully automated process of `Decomposition` → `Execution` → `Review` → `Retry`, minimizing manual intervention. |
| **🔧 Flexible Architecture** | Supports two runtime environments: **Claude Code (MCP)** and **OpenCode (Oh-My-OpenCode)**, choose as needed. |
| **🔄 Context Retention** | The **SESSION_ID** session reuse mechanism keeps multi-round collaboration context coherent, supports stable execution of long tasks, and prevents information loss. |
### 🔀 Two Runtime Environments
| Feature | Claude Code (MCP) | OpenCode (Oh-My-OpenCode) |
|------|-------------------|---------------------------|
| **Architect** | Claude | Sisyphus (Claude Opus) |
| **Tool Invocation** | MCP Protocol | Sub-Agent Delegation |
| **Coder** | claude CLI + Configurable Backend | document-writer Agent |
| **Codex** | codex CLI | oracle Agent |
| **Gemini** | gemini CLI | frontend-ui-ux-engineer Agent |
| **Applicable Scenarios** | Claude Code Users | Preference for Open Source, Multiple LLM Providers |
| **Configuration Complexity** | Medium | Higher |
## 🤖 Role Division and Collaboration
In this system, each model has a clear responsibility:
* **Claude**: 👑 **Architect / Coordinator**
* Responsible for requirement analysis, task decomposition, Prompt optimization, and final decision-making.
* **Coder**: 🔨 **Executor**
* Refers to **affordable, readily available models with strong execution capabilities** (such as GLM-4.7, DeepSeek-V3, etc.).
* Can be connected to **any third-party model that supports the Claude Code API**, responsible for specific code generation, modification, and batch task processing.
* **Codex (OpenAI)**: ⚖️ **Auditor / Senior Code Consultant**
* Responsible for independent code quality control, providing objective Code Review, and can also serve as a consulting advisor for architecture design and complex solutions.
* **Gemini**: 🧠 **Versatile Expert (Optional)**
* Top-tier AI expert at the same level as Claude, called upon as needed. Can serve as a high-level consultant, independent auditor, or code executor.
### 📊 Real-World Case Study
**[Batch Unit Test Generation](cases/2025-01-05-unit-test-generation/README.md)** - CCG Architecture Real-World Test Record
| Metric | Pure Claude Solution | CCG Collaboration Solution | Description |
| :--- | :--- | :--- | :--- |
| **Task Scale** | 7,488 lines of code (481 test cases) | 7,488 lines of code (481 test cases) | Generate unit tests for a backend project |
| **Total Cost** | $3.13 | $0.55 | **82% Savings** |
| **Claude Cost** | $3.13 | $0.29 | **91% Savings** (Architecture Scheduling Only) |
| **Coder Cost** | $0 | $0.26 | Execute Heavy Code Generation Tasks |
| **Quality Audit** | ❌ No Independent Audit | ✅ Claude Preliminary Review + Codex Final Review | Dual Control, Controllable Code Quality |
**Core Advantages**:
- 💰 **Cost Optimization**: Claude only emits short instructions and handles acceptance work at cheap input-token prices, avoiding the output of expensive code tokens
- 🔄 **Context Retention**: The SESSION_ID session reuse mechanism ensures coherent multi-round collaboration context, supporting stable execution of long tasks
- ⚡ **Long Task Stability**: Optimized task decomposition and retry strategies ensure stable completion of large tasks (such as batch generation of 7,488 lines of test code)
- 🛡️ **Quality Assurance**: Dual review mechanism (Claude Preliminary Review + Codex Final Review), controllable code quality
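These advantages hinge on the closed loop and on strict SESSION_ID reuse. As a rough illustration only (in reality Claude drives this loop itself via MCP tools; `call_coder` and `call_codex` below are hypothetical stand-ins, not part of any real API), the loop looks like:

```python
# Hypothetical sketch of the CCG closed loop. call_coder / call_codex are
# illustrative stand-ins for the real MCP tools, not an actual API.

def call_coder(prompt: str, session_id: str = "") -> dict:
    # Stand-in: the real tool delegates to the configured backend model.
    return {"success": True, "SESSION_ID": session_id or "coder-session-1",
            "result": f"code for: {prompt}"}

def call_codex(prompt: str, session_id: str = "") -> dict:
    # Stand-in: the real tool runs an independent read-only review.
    return {"success": True, "SESSION_ID": session_id or "codex-session-1",
            "result": "LGTM"}

def run_task(task: str, max_rounds: int = 3) -> str:
    coder_sid = codex_sid = ""  # per-role sessions are independent, never mixed
    for _ in range(max_rounds):
        coder = call_coder(task, coder_sid)
        coder_sid = coder["SESSION_ID"]      # always carry the returned SESSION_ID
        review = call_codex(f"Review this change:\n{coder['result']}", codex_sid)
        codex_sid = review["SESSION_ID"]
        if "LGTM" in review["result"]:
            return coder["result"]           # ✅ review passed
        task = f"Fix per review: {review['result']}"  # ❌ loop back to Coder
    raise RuntimeError("review did not pass within max_rounds")
```

The key discipline is that each role's SESSION_ID is carried forward verbatim, and a failed review feeds back into a new Coder prompt rather than ending the loop.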
### Collaboration Flowchart
```mermaid
flowchart TB
subgraph UserLayer ["User Layer"]
User(["👤 User Requirements"])
end
subgraph ClaudeLayer ["Claude - Architect"]
Claude["🧠 Requirement Analysis & Task Decomposition"]
Prompt["📝 Construct Precise Prompt"]
Review["🔍 Result Review & Decision-Making"]
end
subgraph MCPLayer ["MCP Server"]
MCP{{"⚙️ CCG-MCP"}}
end
subgraph ToolLayer ["Execution Layer"]
Coder["🔨 Coder Tool<br><code>claude CLI → Configurable Backend</code><br>sandbox: workspace-write"]
Codex["⚖️ Codex Tool<br><code>codex CLI</code><br>sandbox: read-only"]
Gemini["🧠 Gemini Tool<br><code>gemini CLI</code><br>sandbox: workspace-write"]
end
User --> Claude
Claude --> Prompt
Prompt -->|"coder / gemini"| MCP
MCP -->|"Streaming JSON"| Coder
MCP -->|"Streaming JSON"| Gemini
Coder -->|"SESSION_ID + result"| Review
Gemini -->|"SESSION_ID + result"| Review
Review -->|"Need Audit / Expert Opinion"| MCP
MCP -->|"Streaming JSON"| Codex
Codex -->|"SESSION_ID + Audit Conclusion"| Review
Review -->|"✅ Pass"| Done(["🎉 Task Completed"])
Review -->|"❌ Need Modification"| Prompt
Review -->|"⚠️ Minor Optimization"| Claude
```
**Typical Workflow**:
```
1. User submits a requirement
↓
2. Claude analyzes, decomposes the task, and constructs a precise Prompt
↓
3. Call coder (or gemini) tool → execute code generation/modification
↓
4. Claude reviews the results, decides whether Codex audit or Gemini consultation is needed
↓
5. Call codex (or gemini) tool → independent Code Review / obtain a second opinion
↓
6. Based on the audit conclusion: pass / optimize / re-execute
```
## 🚀 Quick Start
### 1. Prerequisites
Before you begin, make sure you have the following tools installed:
* **uv**: Ultra-fast Python package manager ([Installation Guide](https://docs.astral.sh/uv/))
* Windows: `powershell -c "irm https://astral.sh/uv/install.ps1 | iex"`
* macOS/Linux: `curl -LsSf https://astral.sh/uv/install.sh | sh`
* **Claude Code**: Version **≥ v2.0.56** ([Installation Guide](https://code.claude.com/docs))
* **Codex CLI**: Version **≥ v0.61.0** ([Installation Guide](https://developers.openai.com/codex/quickstart))
* **Gemini CLI** (Optional): If you need to use Gemini tools ([Installation Guide](https://github.com/google-gemini/gemini-cli))
* **Coder Backend API Token**: Must be configured by you; GLM-4.7 from [Zhipu AI](https://open.bigmodel.cn) is recommended as a reference case.
> **⚠️ Important Notes: Fees and Permissions**
> * **Tool Authorization**: `claude`, `codex`, and `gemini` CLI tools all need to be logged in and authorized locally.
> * **Fee Description**: The use of these tools usually involves official subscription fees or API usage fees.
> * **Claude Code**: Requires an Anthropic account and corresponding billing settings. (or third-party access)
> * **Codex CLI**: Requires an OpenAI account or API quota.
> * **Gemini CLI**: By default, the `gemini-3-pro-preview` model is called (may involve Google AI subscription or API call restrictions).
> * **Coder API**: You need to bear the API call costs of the configured backend model (such as Zhipu AI, DeepSeek, etc.).
> * Please ensure that all tools are logged in and account resources are sufficient before formal use.
### ⚡ One-Click Configuration (Recommended)
We provide a one-click configuration script to automatically complete all setup steps:
**Windows (Double-click to run or execute in the terminal)**
```powershell
git clone https://github.com/FredericMN/Coder-Codex-Gemini.git
cd Coder-Codex-Gemini
.\setup.bat
```
**macOS/Linux**
```bash
git clone https://github.com/FredericMN/Coder-Codex-Gemini.git
cd Coder-Codex-Gemini
chmod +x setup.sh && ./setup.sh
```
**Script Execution Flow**:
1. **Check and install uv** - Automatically download and install if not installed
2. **Check Claude CLI** - Verify if it has been installed
3. **Install project dependencies** - Run `uv sync`
4. **Register MCP Server** - Automatically configure to the user level
5. **Install Skills** - Copy workflow guidance to `~/.claude/skills/`
6. **Configure Global Prompt** - Automatically append to `~/.claude/CLAUDE.md`
7. **Configure Coder** - Interactively enter API Token, Base URL, and Model
**🔐 Security Notes**:
- API Token is not displayed on the screen when entered
- The configuration file is saved in `~/.ccg-mcp/config.toml`, and the permissions are set to be readable and writable only by the current user
- Token is only stored locally and will not be uploaded or shared
> 💡 **Tip**: After the one-click configuration is complete, please restart the Claude Code CLI for the configuration to take effect.
### Windows User Notes
When using CCG-MCP on Windows, make sure the following CLI tools are correctly added to the system PATH:
| Tool | Verification Command | Common Installation Location |
|------|----------|--------------|
| `claude` | `where claude` | `%APPDATA%\npm\claude.cmd` or installed globally via npm |
| `codex` | `where codex` | `%APPDATA%\npm\codex.cmd` or installed globally via npm |
| `gemini` | `where gemini` | `%APPDATA%\npm\gemini.cmd` or installed globally via npm |
| `uv` | `where uv` | `%USERPROFILE%\.local\bin\uv.exe` |
**How to add to PATH**:
1. Open "System Properties" → "Advanced" → "Environment Variables"
2. Find `Path` in "User variables", click "Edit"
3. Add the directory where the tool is located (such as `%APPDATA%\npm`)
4. Restart the terminal for the configuration to take effect
**Verify Installation**:
```powershell
# Check if all tools are available
claude --version
codex --version
gemini --version # Optional
uv --version
```
> **Tip**: If you encounter a "command does not exist" error, check whether the PATH configuration is correct.
### 2. Install MCP Server
#### Remote Installation (Recommended)
The one-click script uses the remote installation method by default, no additional operation is required. If you need to install manually:
```bash
claude mcp add ccg -s user --transport stdio -- uvx --refresh --from git+https://github.com/FredericMN/Coder-Codex-Gemini.git ccg-mcp
```
#### Local Installation (Development and Debugging Only)
If you need to modify the source code or debug, you can use local installation:
```bash
# Enter the project directory
cd /path/to/Coder-Codex-Gemini
# Install dependencies
uv sync
# Register MCP Server (using local path)
# Windows
claude mcp add ccg -s user --transport stdio -- uv run --directory $pwd ccg-mcp
# macOS/Linux
claude mcp add ccg -s user --transport stdio -- uv run --directory $(pwd) ccg-mcp
```
#### Remote Installation vs Local Installation
| Feature | Remote Installation (Recommended) | Local Installation |
|------|-----------------|---------|
| **Stability** | ✅ Independent pull each time, no file locking issues | ⚠️ Concurrent access from multiple terminals may conflict |
| **Applicable Scenarios** | Daily Use | Development and Debugging |
| **Skills Support** | Need to be manually installed to `~/.claude/skills/` | Need to be manually installed (or use the one-click script) |
| **Update Method** | Automatically get the latest version | Need to manually `git pull` |
| **Dependency Requirements** | Requires `git` command | Only requires `uv` |
> **⚠️ Note**: When installing locally, if multiple terminals call MCP at the same time, "MCP is not responding" may occur due to file locking. It is recommended to use the remote installation method for daily use.
**Uninstall MCP Server**
```bash
claude mcp remove ccg -s user
```
### 3. Configure Coder
It is recommended to use the **configuration file** method for management.
> **Configurable Backend**: The Coder tool calls its backend model through the Claude Code CLI. **You must configure the backend yourself**; GLM-4.7 is recommended as a reference case, but any model that supports the Claude Code API (such as Minimax, DeepSeek, etc.) also works.
**Create Configuration Directory**:
```bash
# Windows
mkdir %USERPROFILE%\.ccg-mcp
# macOS/Linux
mkdir -p ~/.ccg-mcp
```
**Create Configuration File** `~/.ccg-mcp/config.toml`:
```toml
[coder]
api_token = "your-api-token" # Required
base_url = "https://open.bigmodel.cn/api/anthropic" # Example: GLM API
model = "glm-4.7" # Example: GLM-4.7, can be replaced with other models
[coder.env]
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC = "1"
```
### 4. Install Skills (Recommended)
The Skills layer provides workflow guidance to ensure that Claude uses the MCP tool correctly.
```bash
# Windows (PowerShell)
if (!(Test-Path "$env:USERPROFILE\.claude\skills")) { mkdir "$env:USERPROFILE\.claude\skills" }
xcopy /E /I "skills\ccg-workflow" "$env:USERPROFILE\.claude\skills\ccg-workflow"
# Optional: Install Gemini Collaboration Skill
xcopy /E /I "skills\gemini-collaboration" "$env:USERPROFILE\.claude\skills\gemini-collaboration"
# macOS/Linux
mkdir -p ~/.claude/skills
cp -r skills/ccg-workflow ~/.claude/skills/
# Optional: Install Gemini Collaboration Skill
cp -r skills/gemini-collaboration ~/.claude/skills/
```
### 5. Configure Global Prompt (Recommended)
Add mandatory rules to `~/.claude/CLAUDE.md` to ensure that Claude follows the collaboration process:
```markdown
# Global Protocol
## Mandatory Rules
- **Default Collaboration**: All code/document modification tasks **must** be delegated to Coder for execution. Upon completion of each phase, Codex **must** be called for review.
- **Confirmation Required for Skipping**: If it is determined that collaboration is unnecessary, you **must immediately pause** and report:
> "This is a simple [description] task. I believe it is not necessary to call Coder/Codex. Do you agree? Waiting for your confirmation."
- **Violation Leads to Termination**: Skipping Coder execution or Codex review without confirmation = **process violation**.
- **Mandatory Session Reuse**: You must save the received `SESSION_ID` and always carry the `SESSION_ID` in the request parameters to maintain context.
- **SESSION_ID Management Specification**: The SESSION_IDs of each role (Coder/Codex/Gemini) are independent of each other. You must use the actual SESSION_ID value returned by the MCP tool response. It is strictly forbidden to create IDs yourself or mix IDs of different roles.
## ⚠️ Skill Pre-reading Requirements (Mandatory)
**Before calling any CCG MCP tool, you must first execute the corresponding Skill to obtain best practice guidance:**
| MCP Tool | Prerequisite Skill | Execution Method |
|----------|-----------|---------|
| `mcp__ccg__coder` | `/ccg-workflow` | Must execute first |
| `mcp__ccg__codex` | `/ccg-workflow` | Must execute first |
| `mcp__ccg__gemini` | `/gemini-collaboration` | Must execute first |
**Execution Process**:
1. User requests to use Coder/Codex/Gemini
2. **Immediately execute the corresponding Skill** (e.g., `/ccg-workflow`, `/gemini-collaboration`)
3. Read the guidance content returned by the Skill
4. Call the MCP tool according to the guidance
**Prohibited Behaviors**:
- ❌ Skip Skill and directly call MCP tool
- ❌ Assume you already know the best practices without executing Skill
---
# AI Collaboration System
**Claude is the final decision-maker**. All AI opinions are for reference only. Critical thinking is required to make the best decision.
## Role Division
| Role | Positioning | Purpose | sandbox | Retries |
|------|------|------|---------|------|
| **Coder** | Code executor | Generate/modify code, batch tasks | workspace-write | No retry by default |
| **Codex** | Code reviewer/Senior consultant | Architecture design, quality control, Review | read-only | 1 retry by default |
| **Gemini** | Senior consultant (on demand) | Architecture design, second opinion, front-end/UI | workspace-write (yolo) | 1 retry by default |
## Core Process
1. **Coder Execution**: Delegate all modification tasks to Coder for processing
2. **Claude Acceptance**: Quickly check after Coder completes. If there are minor errors, Claude fixes them itself
3. **Codex Review**: Call review after phased development is completed. If there are errors, delegate Coder to fix them and continue iterating until passed
## Task Decomposition Principles (Distributed to Coder)
> ⚠️ **One call, one goal**. Do not pile up multiple unrelated requirements to Coder.
- **Precise Prompt**: Clear goals, sufficient context, and clear acceptance criteria
- **Split by module**: Related changes can be merged, and independent modules should be separated
- **Phased Review**: Claude acceptance for each module, Codex review after milestones
## Preparation Before Coding (Complex Tasks)
1. Search for affected symbols/entry points
2. List the files that need to be modified
3. For complex problems, you can communicate the solution with Codex or Gemini first
## Gemini Trigger Scenarios
- **User explicitly requests**: The user specifies to use Gemini
- **Claude autonomously calls**: When designing front-end/UI, requiring a second opinion or independent perspective
```
> **Note**: Pure MCP can also work, but Skills + global Prompt configuration is recommended for the best experience.
### 6. Verify Installation
Run the following command to check the MCP server status:
```bash
claude mcp list
```
✅ Seeing the following output means the installation was successful:
```text
ccg: ... - ✓ Connected
```
### 7. (Optional) Permission Configuration
For a smooth experience, you can add automatic authorization in `~/.claude/settings.json`:
```json
{
"permissions": {
"allow": [
"mcp__ccg__coder",
"mcp__ccg__codex",
"mcp__ccg__gemini"
]
}
}
```
## 🛠️ Tool Details
### `coder` - Code Executor
Call the configurable backend model to execute specific code generation or modification tasks.
> **Configurable Backend**: The Coder tool calls its backend model through the Claude Code CLI. **You must configure the backend yourself**; GLM-4.7 is recommended as a reference case, but any model that supports the Claude Code API (such as Minimax, DeepSeek, etc.) also works.
| Parameter | Type | Required | Default Value | Description |
| :--- | :--- | :---: | :--- | :--- |
| `PROMPT` | string | ✅ | - | Specific task instructions and code requirements |
| `cd` | Path | ✅ | - | Target working directory |
| `sandbox` | string | - | `workspace-write` | Sandbox policy, allows writing by default |
| `SESSION_ID` | string | - | `""` | Session ID, used to maintain multi-turn dialogue context |
| `return_all_messages` | bool | - | `false` | Whether to return the complete dialogue history (for debugging) |
| `return_metrics` | bool | - | `false` | Whether to include metrics such as time consumption in the return value |
| `timeout` | int | - | `300` | Idle timeout (seconds); triggered when there is no output for longer than this |
| `max_duration` | int | - | `1800` | Total duration hard limit (seconds), 30 minutes by default, 0 means unlimited |
| `max_retries` | int | - | `0` | Maximum number of retries (Coder does not retry by default) |
| `log_metrics` | bool | - | `false` | Whether to output metrics to stderr |
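For example, a first call followed by a session-reusing follow-up might use parameters like these (all values are illustrative; the real SESSION_ID comes from the tool's response):

```python
# Illustrative parameter payloads for two consecutive coder calls.
# All values are examples; see the table above for the field definitions.

first_call = {
    "PROMPT": "Add input validation to utils/parse.py; raise ValueError on bad input.",
    "cd": "/path/to/project",
    # sandbox defaults to "workspace-write"; omitting SESSION_ID starts a new session
}

follow_up = {
    "PROMPT": "Now add unit tests for the validation you just wrote.",
    "cd": "/path/to/project",
    "SESSION_ID": "uuid-from-first-response",  # placeholder: reuse the returned value verbatim
    "return_metrics": True,                    # optionally include timing metrics
}
```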
### `codex` - Code Reviewer
Call Codex for independent and strict code review.
| Parameter | Type | Required | Default Value | Description |
| :--- | :--- | :---: | :--- | :--- |
| `PROMPT` | string | ✅ | - | Review task description |
| `cd` | Path | ✅ | - | Target working directory |
| `sandbox` | string | - | `read-only` | **Mandatory read-only**, reviewers are strictly prohibited from modifying code |
| `SESSION_ID` | string | - | `""` | Session ID |
| `skip_git_repo_check` | bool | - | `true` | Whether to allow running in non-Git repositories |
| `return_all_messages` | bool | - | `false` | Whether to return the complete dialogue history (for debugging) |
| `image` | List[Path]| - | `[]` | List of attached images (for UI review, etc.) |
| `model` | string | - | `""` | Specify the model, the default is to use Codex's own configuration |
| `return_metrics` | bool | - | `false` | Whether to include metrics such as time consumption in the return value |
| `timeout` | int | - | `300` | Idle timeout (seconds); triggered when there is no output for longer than this |
| `max_duration` | int | - | `1800` | Total duration hard limit (seconds), 30 minutes by default, 0 means unlimited |
| `max_retries` | int | - | `1` | Maximum number of retries (Codex allows 1 retry by default) |
| `log_metrics` | bool | - | `false` | Whether to output metrics to stderr |
| `yolo` | bool | - | `false` | Run all commands without approval (skip sandbox) |
| `profile` | string | - | `""` | Name of the configuration file loaded from ~/.codex/config.toml |
### `gemini` - Versatile Expert (Optional)
Call Gemini CLI for code execution, technical consultation, or code review. Top AI expert at the same level as Claude.
| Parameter | Type | Required | Default Value | Description |
| :--- | :--- | :---: | :--- | :--- |
| `PROMPT` | string | ✅ | - | Task instructions, sufficient background information is required |
| `cd` | Path | ✅ | - | Working directory |
| `sandbox` | string | - | `workspace-write` | Sandbox policy, allows writing by default (flexible control) |
| `yolo` | bool | - | `true` | Skip approval, enabled by default |
| `SESSION_ID` | string | - | `""` | Session ID, used for multi-turn dialogues |
| `model` | string | - | `gemini-3-pro-preview` | Specify model version |
| `return_all_messages` | bool | - | `false` | Whether to return the complete dialogue history |
| `return_metrics` | bool | - | `false` | Whether to include metrics such as time consumption in the return value |
| `timeout` | int | - | `300` | Idle timeout (seconds) |
| `max_duration` | int | - | `1800` | Total duration hard limit (seconds) |
| `max_retries` | int | - | `1` | Maximum number of retries |
| `log_metrics` | bool | - | `false` | Whether to output metrics to stderr |
**Role Positioning**:
- 🧠 **Senior Consultant**: Architecture design, technology selection, complex solution discussion
- ⚖️ **Independent Review**: Code Review, solution review, quality control
- 🔨 **Code Execution**: Prototype development, function implementation (especially good at front-end/UI)
**Trigger Scenarios**:
- User explicitly requests to use Gemini
- Claude needs a second opinion or independent perspective
### Timeout Mechanism
This project adopts a **dual timeout protection** mechanism:
| Timeout Type | Parameter | Default Value | Description |
|----------|------|--------|------|
| **Idle Timeout** | `timeout` | 300s | Triggered when there is no output for longer than this; any output resets the timer |
| **Total Duration Hard Limit** | `max_duration` | 1800s | Counted from the start regardless of output; the process is forcibly terminated once exceeded |
**Error Type Differentiation**:
- `idle_timeout`: Idle timeout (no output)
- `timeout`: Total duration timeout
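The interaction between the two timers can be shown with a small simulation (a conceptual sketch of the policy only, not the project's implementation):

```python
# Conceptual simulation of the dual-timeout policy: an idle timer that is
# reset by every output line, plus a hard wall-clock cap. Not the real code.

def run_with_timeouts(events, timeout=300, max_duration=1800):
    """events: (gap_seconds, line) pairs simulating the timing of CLI output."""
    elapsed = 0.0
    collected = []
    for gap, line in events:
        if gap > timeout:                    # no output for too long
            return {"success": False, "error_kind": "idle_timeout"}
        elapsed += gap
        if max_duration and elapsed > max_duration:
            return {"success": False, "error_kind": "timeout"}   # hard cap
        collected.append(line)               # any output resets the idle timer
    return {"success": True, "result": "\n".join(collected)}
```

Note how steady output keeps resetting the idle timer, yet the total-duration cap still fires eventually; a long silent gap trips the idle timer first.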
### Return Value Structure
```json
// Success (default behavior, return_metrics=false)
{
"success": true,
"tool": "coder",
"SESSION_ID": "uuid-string",
"result": "Reply content"
}
// Success (enable metrics, return_metrics=true)
{
"success": true,
"tool": "coder",
"SESSION_ID": "uuid-string",
"result": "Reply content",
"metrics": {
"ts_start": "2026-01-02T10:00:00.000Z",
"ts_end": "2026-01-02T10:00:05.123Z",
"duration_ms": 5123,
"tool": "coder",
"sandbox": "workspace-write",
"success": true,
"retries": 0,
"exit_code": 0,
"prompt_chars": 256,
"prompt_lines": 10,
"result_chars": 1024,
"result_lines": 50,
"raw_output_lines": 60,
"json_decode_errors": 0
}
}
// Failure (structured error, default behavior)
{
"success": false,
"tool": "coder",
"error": "Error summary",
"error_kind": "idle_timeout | timeout | upstream_error | ...",
"error_detail": {
"message": "Error description",
"exit_code": 1,
"last_lines": ["Last 20 lines of output..."],
"idle_timeout_s": 300,
"max_duration_s": 1800
// "retries": 1 // Only returned when retries > 0
}
}
// Failure (enable metrics, return_metrics=true)
{
"success": false,
"tool": "coder",
"error": "Error summary",
"error_kind": "idle_timeout | timeout | upstream_error | ...",
"error_detail": {
"message": "Error description",
"exit_code": 1,
"last_lines": ["Last 20 lines of output..."],
"idle_timeout_s": 300,
"max_duration_s": 1800
// "retries": 1 // Only returned when retries > 0
},
"metrics": {
"ts_start": "2026-01-02T10:00:00.000Z",
"ts_end": "2026-01-02T10:00:05.123Z",
"duration_ms": 5123,
"tool": "coder",
"sandbox": "workspace-write",
"success": false,
"retries": 0,
"exit_code": 1,
"prompt_chars": 256,
"prompt_lines": 10,
"json_decode_errors": 0
}
}
```
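A caller might dispatch on these structures roughly as follows (field names follow the examples above; the handling policy itself is only a suggestion):

```python
# Sketch of consuming the return structures above. Field names match the
# documented examples; the handling policy is illustrative.

def handle_response(resp: dict) -> str:
    if resp.get("success"):
        # Callers should persist resp["SESSION_ID"] for follow-up calls
        return resp["result"]
    kind = resp.get("error_kind", "")
    if kind in ("idle_timeout", "timeout"):
        # Often transient: retry with the same SESSION_ID, or split the task
        raise TimeoutError(resp.get("error", "tool timed out"))
    detail = resp.get("error_detail", {})
    raise RuntimeError(f"{resp.get('error')}: {detail.get('message', '')}")
```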
## 📚 Architecture Description
### Three-Layer Configuration Architecture (Claude Code)
This project adopts a mixed architecture of **MCP + Skills + Global Prompt** in the Claude Code environment, with clear responsibilities at each layer:
| Layer | Responsibility | Token Consumption | Necessity |
|------|------|-----------|--------|
| **MCP Layer** | Tool implementation (type safety, structured errors, retries, metrics) | Fixed (tool schema) | **Required** |
| **Skills Layer** | Workflow guidance (trigger conditions, process, templates) | Loaded on demand | Recommended |
| **Global Prompt Layer** | Mandatory rules (ensure Claude follows the collaboration process) | Fixed (approximately 20 lines) | Recommended |
**Why is full configuration recommended?**
- **Pure MCP**: The tool is available, but Claude may not understand when/how to use it
- **+ Skills**: Claude learns the workflow and knows when to trigger collaboration
- **+ Global Prompt**: Mandatory rules ensure that Claude always abides by collaborative discipline
**Token Optimization**: Skills are loaded on demand, and workflow guidance is not loaded for non-code tasks, which can significantly save tokens
---
## 🔄 OpenCode Configuration
> **OpenCode** is an open-source alternative to Claude Code. With the **Oh-My-OpenCode** plugin, similar multi-agent orchestration can be achieved without any additional MCP or Skills setup.
### Applicable Scenarios
- Want to use multiple LLM providers (Claude, GPT, Gemini)
- Need multiple Agents to collaborate in parallel
- Want to see the real-time activity process of each sub-agent
- Prefer open-source tools
### 🆕 New Users vs. Installed Users
| User Type | Recommended Method | Description |
|----------|----------|------|
| **OpenCode not installed** | One-click script | Automatically completes all installation and configuration |
| **OpenCode + Oh-My-OpenCode installed** | Manual configuration | Refer to the template file and merge the configuration as needed |
> ⚠️ **Note for installed users**: The one-click script will detect existing configuration files and ask whether to overwrite them. If you choose to overwrite, the original file will be automatically backed up. It is recommended to choose to skip and then manually merge the required configurations.
### ⚡ One-Click Configuration (Recommended for Users Who Have Not Installed OpenCode)
**Windows (double-click to run or execute in terminal)**
```powershell
git clone https://github.com/FredericMN/Coder-Codex-Gemini.git
cd Coder-Codex-Gemini
.\setup-opencode.bat
```
**macOS/Linux**
```bash
git clone https://github.com/FredericMN/Coder-Codex-Gemini.git
cd Coder-Codex-Gemini
chmod +x setup-opencode.sh && ./setup-opencode.sh
```
**Script Execution Process**:
1. **Check and install dependencies** - bun, opencode CLI
2. **Install Oh-My-OpenCode** - interactively select subscription status
3. **Configure opencode.json** - model definition and API configuration
4. **Configure oh-my-opencode.json** - CCG agent role definition
5. **Configure AGENTS.md** - collaboration agreement
### 📝 Manual Configuration (Recommended for Installed Users)
If you have already installed OpenCode and Oh-My-OpenCode, it is recommended to manually merge the configuration by referring to the following template files:
| Template File | Target Location | Description |
|----------|----------|------|
| [`templates/opencode/opencode.json`](templates/opencode/opencode.json) | `~/.config/opencode/opencode.json` | Model and API configuration |
| [`templates/opencode/oh-my-opencode.json`](templates/opencode/oh-my-opencode.json) | `~/.config/opencode/oh-my-opencode.json` | Agent role definition |
| [`templates/opencode/AGENTS.md`](templates/opencode/AGENTS.md) | `~/.config/opencode/AGENTS.md` | Collaboration agreement |
#### Core Configuration Items
**1. `oh-my-opencode.json` - Agent Role Definition (Important)**
The main configurations needed are the `prompt_append` and `model` for each agent:
> 💡 **About `prompt_append`**: This is the "append prompt," which adds CCG collaboration rules to the original Oh-My-OpenCode prompt without overwriting the original OMO prompt, maximizing compatibility.
```json
{
"agents": {
"Sisyphus": {
"model": "anthropic/claude-opus-4-5-20251101",
"prompt_append": "## CCG Collaboration Rules\n\nYou are an architect..."
},
"document-writer": {
"model": "zhipuai-coding-plan/glm-4.7",
"prompt_append": "## ⚠️ Identity Confirmation: You are the Coder sub-agent..."
},
"oracle": {
"model": "openai/gpt-5.1-codex-mini",
"prompt_append": "## ⚠️ Identity Confirmation: You are the Codex sub-agent..."
},
"frontend-ui-ux-engineer": {
"model": "google/antigravity-gemini-3-pro-high",
"prompt_append": "## ⚠️ Identity Confirmation: You are the Gemini sub-agent..."
}
}
}
```
- **`prompt_append`**: Defines the role behavior specifications for each agent and is the core of CCG collaboration.
- **`model`**: Can be adjusted to your subscribed model as needed.
**2. `opencode.json` - Model and API Configuration**
In my personal use case, most models (OpenAI, Google, Zhipu) are subscribed through OAuth or official API authentication, requiring no additional URL/API configuration.
**Cases requiring third-party relay** (applicable to OpenAI, Claude, and other models):
```json
{
"provider": {
"anthropic": {
"options": {
"baseURL": "https://your-proxy-api.com/v1",
"apiKey": "your-api-key"
},
"models": {
"claude-opus-4-5-20251101": { "name": "claude-opus-4-5-20251101" }
}
}
}
}
```
#### ⚠️ Third-Party API Relay Notes
When using a third-party API relay, **each key under `models` must exactly match a model name supported by the relay**:
```json
// ✅ Correct: The key name matches the model name supported by the relay station
"models": {
"claude-opus-4-5-20251101": { "name": "claude-opus-4-5-20251101" }
}
// ❌ Incorrect: The key name does not match the relay station, which will cause the call to fail
"models": {
"my-custom-name": { "name": "claude-opus-4-5-20251101" }
}
```
**Please confirm before configuration**:
1. Which model names are supported by your relay station
2. Set the key name under `models` to the exact name supported by the relay station
3. Use the `provider/model-key` format when referencing in `oh-my-opencode.json` (e.g., `anthropic/claude-opus-4-5-20251101`)
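A quick way to sanity-check a config against this rule is to verify that every key equals its `name` field (an illustrative helper only, not part of OpenCode):

```python
# Illustrative check for the relay rule above: with a third-party relay,
# each key under "models" should equal its "name" field. Not an OpenCode API.

def mismatched_model_keys(models: dict) -> list:
    """Return keys whose 'name' field does not match the key itself."""
    return [key for key, spec in models.items() if spec.get("name") != key]

good = {"claude-opus-4-5-20251101": {"name": "claude-opus-4-5-20251101"}}
bad = {"my-custom-name": {"name": "claude-opus-4-5-20251101"}}
```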
### Agent Role Mapping (Template Configuration, Specific Models Can Be Freely Replaced)
| CCG Role | OpenCode Agent | Model | Responsibility |
|----------|---------------|------|------|
| **Architect** | Sisyphus | Claude Opus 4.5 | Requirements analysis, task breakdown, final decision |
| **Coder** | document-writer | GLM-4.7 | Code generation, document modification, batch tasks |
| **Codex** | oracle | GPT-5.1 Codex Mini | Code review, architecture consulting, quality control |
| **Gemini** | frontend-ui-ux-engineer | Gemini 3 Pro High | Frontend/UI, second opinion, independent perspective |
### Authentication Configuration
After installation, you need to complete the authentication for each provider:
```bash
# 1. Anthropic (Claude)
opencode auth login
# → Select: Anthropic → Claude Pro/Max
# 2. OpenAI (ChatGPT/Codex)
opencode auth login
# → Select: OpenAI → ChatGPT Plus/Pro (Codex Subscription)
# 3. Google (Gemini)
opencode auth login
# → Select: Google → OAuth with Google (Antigravity)
```
> ⚠️ **Important**: When using the Antigravity plugin, you must set `"google_auth": false` in `oh-my-opencode.json`.
### Keyboard Shortcuts
| Shortcut | Function |
|:-------|:-----|
| `Tab` | Switch build/plan mode |
| `Ctrl+X` then `B` | Toggle Sidebar |
| `Ctrl+X` then `→/←` | Switch subtask |
| `Ctrl+X` then `↑` | Return to main task |
| `Ctrl+P` | Command Palette |
---
## 🧑‍💻 Development and Contribution
Welcome to submit Issues and Pull Requests!
```bash
# 1. Clone the repository
git clone https://github.com/FredericMN/Coder-Codex-Gemini.git
cd Coder-Codex-Gemini
# 2. Install dependencies (using uv)
uv sync
# 3. Run local debugging
uv run ccg-mcp
```
## 📚 Reference Resources
- **FastMCP**: [GitHub](https://github.com/jlowin/fastmcp) - Efficient MCP framework
- **GLM API**: [Zhipu AI](https://open.bigmodel.cn) - Powerful large model (recommended as the Coder backend)
- **Claude Code**: [Documentation](https://docs.anthropic.com/en/docs/claude-code)
- **OpenCode**: [Official Docs](https://opencode.ai/docs) - Open Source AI Coding Agent
- **Oh-My-OpenCode**: [GitHub](https://github.com/code-yeongyu/oh-my-opencode) - OpenCode multi-agent orchestration plugin
## 📄 License
MIT