<div align="center">
<h1>K8M</h1>
</div>
[English](README_en.md) | [中文](README.md)
[License](https://github.com/weibaohui/k8m/blob/master/LICENSE) | [MCP Catalog](https://archestra.ai/mcp-catalog/weibaohui__k8m)

**k8m** is an AI-driven, lightweight Mini Kubernetes Dashboard console tool designed to simplify cluster management. It is built on AMIS and uses [`kom`](https://github.com/weibaohui/kom) as the Kubernetes API client. **k8m** ships with Qwen2.5-Coder-7B built in, supports interaction with the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B model, and allows integration with your own private large models (including Ollama).
### Demo
[DEMO](http://107.150.119.151:3618)
[DEMO-InCluster Mode](http://107.150.119.151:31999)
Username and password: demo/demo
### Documentation
- For detailed configuration and usage instructions, please refer to the [documentation](docs/README.md).
- For the changelog, please refer to the [CHANGELOG](CHANGELOG.md).
- For customizing large model parameters and configuring private large models, please refer to [Self-hosted/Custom Large Model Support](docs/use-self-hosted-ai.md) and [Ollama Configuration](docs/ollama.md).
- For detailed configuration options, please refer to [Configuration Options Description](docs/config.md).
- For database configuration, please refer to [Database Configuration Description](docs/database.md).
- DeepWiki documentation: [Development Design Document](https://deepwiki.com/weibaohui/k8m)
### Key Features
- **Miniaturized Design**: All functionalities are integrated into a single executable file, making deployment easy and usage simple.
- **User-Friendly**: A friendly user interface and intuitive operation process make Kubernetes management easier. Supports standard k8s, AWS EKS, k3s, kind, k0s, and other cluster types.
- **Efficient Performance**: The backend is built with Golang, and the frontend is based on Baidu AMIS, ensuring high resource utilization and fast response times.
- **AI-Driven Integration**: Implements word explanation, resource guidance, YAML attribute auto-translation, Describe information interpretation, log AI consultation, command execution recommendations based on ChatGPT, and integrates [k8s-gpt](https://github.com/k8sgpt-ai/k8sgpt) functionality, providing intelligent support for managing k8s in Chinese.
- **MCP Integration**: Visual MCP management that enables large-model tool calls, with 49 built-in k8s multi-cluster MCP tools that can be combined into over a hundred cluster operations. k8m can also serve as an MCP Server for other large-model software, making it easy to manage k8s with large models, and it records every MCP call in detail. Supports mainstream mcp.so services.
- **MCP Permission Integration**: Multi-cluster management permissions are integrated with MCP large-model call permissions. In short, MCP calls run with the permissions of the user driving the large model, so it can be used safely without risking unauthorized operations.
- **Multi-Cluster Management**: Automatically identifies clusters using InCluster mode, scans configuration files in the same directory after configuring the kubeconfig path, and registers multiple clusters for management.
- **Multi-Cluster Permission Management**: Supports authorization for users and user groups, with permissions granted per cluster, including read-only, Exec command, and cluster administrator permissions. Users in an authorized group receive corresponding permissions. Supports setting namespace black and white lists.
- **Supports Latest k8s Features**: Supports features like APIGateway, OpenKruise, etc.
- **Pod File Management**: Supports browsing, editing, uploading, downloading, and deleting files within Pods, simplifying daily operations.
- **Pod Runtime Management**: Supports real-time viewing of Pod logs, downloading logs, and executing Shell commands directly within Pods. Supports `grep`-style searches with `-A`/`-B` context lines and highlighting.
- **Open API**: Supports creating API keys for external access, providing a Swagger interface management page.
- **Cluster Inspection Support**: Supports scheduled inspections, custom inspection rules, and Lua script rules. Supports sending notifications to DingTalk groups, WeChat groups, and Feishu groups.
- **CRD Management**: Automatically discovers and manages CRD resources, improving work efficiency.
- **Helm Marketplace**: Supports freely adding Helm repositories, one-click installation, uninstallation, and upgrading of Helm applications, with automatic updates.
- **Cross-Platform Support**: Compatible with Linux, macOS, and Windows, supporting various architectures like x86 and ARM, ensuring seamless operation across platforms.
- **Multi-Database Support**: Supports multiple databases including SQLite, MySQL, PostgreSQL, etc.
- **Fully Open Source**: All source code is open, with no restrictions, allowing for free customization and extension, and commercial use.
The design philosophy of **k8m** is "AI-driven, lightweight and efficient, simplifying complexity," helping developers and operations personnel quickly get started and easily manage Kubernetes clusters.

## **Run**
1. **Download**: Download the latest version from [GitHub release](https://github.com/weibaohui/k8m/releases).
2. **Run**: Start with the command `./k8m`, and access [http://127.0.0.1:3618](http://127.0.0.1:3618).
3. **Login Username and Password**:
- Username: `k8m`
- Password: `k8m`
- Remember to change the username and password in production and enable two-step verification.
4. **Parameters**:
```shell
Usage of ./k8m:
      --admin-password string        Admin password, effective after enabling the temporary admin account
      --admin-username string        Admin username, effective after enabling the temporary admin account
      --connect-cluster              Automatically connect to discovered clusters on startup (default false)
  -d, --debug                        Debug mode
      --enable-temp-admin            Enable the temporary admin account (default false)
      --image-pull-timeout int       Node shell / kubectl shell image pull timeout in seconds (default 30)
      --in-cluster                   Automatically register the host cluster (default true)
      --jwt-token-secret string      Secret used to generate the JWT token after login (default "your-secret-key")
  -c, --kubeconfig string            Path to the kubeconfig file (default "/root/.kube/config")
      --kubectl-shell-image string   Kubectl shell image; must include the kubectl command (default "bitnami/kubectl:latest")
      --log-v int                    klog log level, as in klog.V(2) (default 2)
      --login-type string            Login method: password, oauth, token, etc. (default "password")
      --node-shell-image string      Node shell image; must include the nsenter command (default "alpine:latest")
  -p, --port int                     Listening port (default 3618)
      --print-config                 Print configuration information (default false)
  -v, --v Level                      klog log level (default 2)
```
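For example, several of the flags above can be combined in one start command (the values here are illustrative, not defaults):

```shell
# Listen on a custom port, point at a specific kubeconfig,
# and enable the temporary admin account for first login.
./k8m --port 8080 \
  --kubeconfig ~/.kube/config \
  --enable-temp-admin \
  --admin-username admin \
  --admin-password changeme
```

Disable `--enable-temp-admin` again once a regular admin account has been set up.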
You can also start directly using docker-compose (recommended):
```yaml
services:
  k8m:
    container_name: k8m
    image: registry.cn-hangzhou.aliyuncs.com/minik8m/k8m
    restart: always
    ports:
      - "3618:3618"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - ./data:/app/data
```
After starting, access port `3618`, default username: `k8m`, default password: `k8m`.
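If you prefer plain Docker over docker-compose, a sketch of the equivalent `docker run` command for the compose file above:

```shell
docker run -d \
  --name k8m \
  --restart always \
  -p 3618:3618 \
  -e TZ=Asia/Shanghai \
  -v "$(pwd)/data:/app/data" \
  registry.cn-hangzhou.aliyuncs.com/minik8m/k8m
```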
If you want to quickly experience the online environment, you can visit: [k8m](https://cnb.cool/znb/qifei/-/tree/main/letsfly/justforfun/k8m)
## **ChatGPT Configuration Guide**
### Built-in GPT
Starting from version v0.0.8, the built-in GPT does not require configuration.
If you need to use your own GPT, please refer to the following documents.
- [Self-hosted/Custom Large Model Support](docs/use-self-hosted-ai.md) - How to use self-hosted models.
- [Ollama Configuration](docs/ollama.md) - How to configure and use Ollama large models.
### **ChatGPT Status Debugging**
If the parameters are set but have no effect, try starting with `./k8m -v 6` to get more debugging information.
It prints output like the following, from which you can confirm whether ChatGPT is enabled.
```text
ChatGPT status: true
ChatGPT enabled key: sk-hl**********************************************, url: https://api.siliconflow.cn/v1
ChatGPT model set in environment variables: Qwen/Qwen2.5-7B-Instruct
```
### **ChatGPT Account**
This project integrates the [github.com/sashabaranov/go-openai](https://github.com/sashabaranov/go-openai) SDK.
It is recommended to use the service from [Silicon Flow](https://cloud.siliconflow.cn/) for access within China.
After logging in, create an API_KEY at [https://cloud.siliconflow.cn/account/ak](https://cloud.siliconflow.cn/account/ak).
## **k8m Supports Environment Variable Settings**
k8m supports flexible configuration through environment variables and command-line parameters. The main parameters are as follows:
| Environment Variable | Default Value | Description |
| ------------------------- | ------------------------- | ------------------------------------------------------------ |
| `PORT` | `3618` | Listening port number |
| `KUBECONFIG` | `~/.kube/config` | Path to the `kubeconfig` file, automatically scans all configuration files in the same directory |
| `ANY_SELECT` | `"true"` | Whether to enable word explanation on arbitrary text selection (enabled by default) |
| `LOGIN_TYPE` | `"password"` | Login method (e.g., `password`, `oauth`, `token`) |
| `ENABLE_TEMP_ADMIN` | `"false"` | Whether to enable temporary admin account configuration, default is off. Used for first login or password recovery |
| `ADMIN_USERNAME` | | Admin username, effective after enabling temporary admin account configuration |
| `ADMIN_PASSWORD` | | Admin password, effective after enabling temporary admin account configuration |
| `DEBUG` | `"false"` | Whether to enable `debug` mode |
| `LOG_V` | `"2"` | Log output level, same as klog usage |
| `JWT_TOKEN_SECRET` | `"your-secret-key"` | Secret for generating JWT Token |
| `KUBECTL_SHELL_IMAGE` | `bitnami/kubectl:latest` | Kubectl shell image address |
| `NODE_SHELL_IMAGE` | `alpine:latest` | Node shell image address |
| `IMAGE_PULL_TIMEOUT` | `30` | Node shell, kubectl shell image pull timeout (seconds) |
| `CONNECT_CLUSTER` | `"false"` | Whether to automatically connect to discovered clusters after starting the program, default is off |
| `PRINT_CONFIG` | `"false"` | Whether to print configuration information |
For detailed parameter descriptions and more configuration methods, please refer to [docs/readme.md](docs/README.md).
These environment variables can be set when running the application, for example:
```sh
export PORT=8080
export GIN_MODE="release"
./k8m
```
For other parameters, please refer to [docs/readme.md](docs/README.md).
## Running in a Containerized k8s Cluster
Use [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/) or [MiniKube](https://minikube.sigs.k8s.io/docs/start/) to install a small k8s cluster.
### KinD Method
* Install KinD (on macOS, via Homebrew):
```bash
brew install kind
```
* Create a new Kubernetes cluster:
```bash
kind create cluster --name k8sgpt-demo
```
## Deploy k8m to the Cluster for Experience
### Installation Script
```bash
kubectl apply -f https://raw.githubusercontent.com/weibaohui/k8m/refs/heads/main/deploy/k8m.yaml
```
* Access:
The deployment uses a NodePort Service by default; access port `31999` (http://NodePortIP:31999), or configure an Ingress yourself.
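If NodePort access is inconvenient (for example in KinD, where node ports are not published to the host by default), `kubectl port-forward` is an alternative. The Service name and namespace below are assumptions — check `kubectl get svc -A` for the actual values:

```shell
# Forward local port 31999 to the k8m Service
# (service name and namespace are assumptions)
kubectl port-forward -n k8m svc/k8m 31999:3618
```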
### Modify Configuration
It is recommended to modify settings through environment variables, for example by adding `env` entries to the container spec in deploy.yaml.
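For example, the container spec in deploy.yaml might be extended with `env` entries like these (the variable names come from the environment-variable table above; the surrounding structure is a sketch, not the full manifest):

```yaml
containers:
  - name: k8m
    image: registry.cn-hangzhou.aliyuncs.com/minik8m/k8m
    env:
      - name: LOGIN_TYPE
        value: "password"
      - name: ENABLE_TEMP_ADMIN
        value: "true"
      - name: CONNECT_CLUSTER
        value: "true"
```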
## Development and Debugging
If you want to develop and debug locally, run a frontend build first to generate the `dist` directory. Because this project embeds the frontend into the binary, compilation will fail without `dist`.
#### Step 1: Compile Frontend
```bash
cd ui
pnpm run build
```
#### Step 2: Compile and Debug the Backend
```bash
# Download dependencies
go mod tidy
# Run
air
# or
go run main.go
# Listening on localhost:3618
```
#### Step 3: Frontend Hot Reload
```bash
cd ui
pnpm run dev
# Vite service will listen on localhost:3000
# Vite forwards backend access to port 3618
```
Access http://localhost:3000
### HELP & SUPPORT
If you have any further questions or need additional help, please feel free to contact me!
### Special Thanks
[zhaomingcheng01](https://github.com/zhaomingcheng01): Provided many high-quality suggestions, making outstanding contributions to the usability of k8m~
[La0jin](https://github.com/La0jin): Provided online resources and maintenance, greatly enhancing the presentation of k8m.
[eryajf](https://github.com/eryajf): Provided us with very useful GitHub actions, adding automation for versioning, building, and releasing k8m.
## Contact Me
WeChat (nickname: The Sun of Rome). Search ID: `daluomadetaiyang`, and mention k8m in your request.
<br><img width="214" alt="Image" src="https://github.com/user-attachments/assets/166db141-42c5-42c4-9964-8e25cf12d04c" />
## WeChat Group

## QQ Group
