# SSE WeChat Mini Program
A WeChat Mini Program client built on an SSE service, providing community highlights, chat, and personal information features.
## Project Overview
This project is a WeChat Mini Program client for an SSE (Server-Sent Events) service, built around three core pages: a community highlights page, a chat page, and a personal information page.
## Features
- **Community Highlights Page**: Displays various selected MCP applications and popular recommendations
- **Chat Page**: Implements real-time chat function based on SSE, supports MCP services such as AMap
- **Personal Information Page**: User WeChat account login, logout, and chat count management
- **SSE Real-time Communication**: Supports real-time message push between server and client
- **Tool Invocation Process**: Implements a complete process of tool identification, parameter parsing, invocation execution, and result processing
- **User Authentication System**: Supports WeChat account login, manages user chat counts
## Technical Implementation
- Uses WeChat Mini Program native development framework
- SSE server is implemented based on Express framework
- Chat function uses SSE technology to achieve real-time communication between server and client
- Integrates AMap API to implement map-related functions
- Uses OpenAI interface to process natural language requests and tool invocation identification
- Follows WeChat Mini Program design specifications and development best practices
## Project Structure
```
.
├── miniprogram/                  # WeChat Mini Program client
│   ├── app.js                    # Mini Program entry file
│   ├── app.json                  # Mini Program configuration file
│   ├── app.wxss                  # Global style file
│   ├── images/                   # Image resources
│   ├── pages/                    # Page files
│   │   ├── community/            # Community highlights page
│   │   │   ├── index.js          # Community page logic
│   │   │   ├── index.json        # Page configuration
│   │   │   ├── index.wxml        # Page structure
│   │   │   └── index.wxss        # Page style
│   │   ├── chat/                 # Chat page
│   │   │   ├── index.js          # Chat page logic
│   │   │   ├── index.json        # Page configuration
│   │   │   ├── index.wxml        # Page structure
│   │   │   └── index.wxss        # Page style
│   │   ├── profile/              # Personal information page
│   │   │   ├── index.js          # Personal page logic
│   │   │   ├── index.json        # Page configuration
│   │   │   ├── index.wxml        # Page structure
│   │   │   └── index.wxss        # Page style
│   │   └── travel-guide/         # Travel guide rendering page
│   │       ├── index.js          # Guide page logic
│   │       ├── index.json        # Page configuration
│   │       ├── index.wxml        # Page structure
│   │       └── index.wxss        # Page style
│   └── utils/                    # Utility functions
│       ├── sseClient.js          # SSE client service
│       ├── markdown.js           # Markdown parsing tool
│       └── mcpConfig.js          # MCP SSE configuration (server ID can be obtained from Cherry Studio)
├── server_back/package/          # SSE server
│   ├── sse_proxy_server.ts       # Main file for the SSE proxy server
│   ├── package.json              # Server dependency configuration
│   └── .env                      # Environment variable configuration
├── package.json                  # Project dependency configuration
├── deploy.sh                     # Deployment script
├── 微信开发者工具配置.md         # WeChat Developer Tools configuration documentation
└── README.md                     # Project documentation
```
## SSE Server
The SSE server is implemented based on Express and provides the following functions:
- **SSE Real-time Communication**: Maintains a long connection with the client and pushes messages in real-time
- **AI Chat Capability**: Integrates OpenAI API to provide intelligent dialogue function
### Proxy Server
Location: server_back/package/sse_proxy_server.ts
### API Interface
- `POST /sse-chat`: Creates an SSE connection for real-time chat
- `POST /chat`: Regular chat API, does not use SSE connection
- `GET /status`: Gets the server status
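To make the wire format concrete, here is a minimal sketch of how an SSE endpoint frames each message (the helper name is illustrative, not taken from `sse_proxy_server.ts`):

```javascript
// Minimal sketch of SSE wire-format framing. Each SSE message is an
// optional "event: <name>" line plus one "data:" line per payload line,
// terminated by a blank line.
function formatSseEvent(event, data) {
  const payload = typeof data === 'string' ? data : JSON.stringify(data);
  const dataLines = payload
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n');
  return `event: ${event}\n${dataLines}\n\n`;
}

console.log(formatSseEvent('message', { text: 'hello' }));
// → event: message
//   data: {"text":"hello"}
```

The server writes such frames to the open response of `POST /sse-chat` as chunks arrive from the model.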
### Tool Invocation Example
The following is an actual case of the system processing AMap weather query:
```
Received SSE chat request
OpenAI response: Tool invocation request mcp_amap_maps_maps_weather
Processing tool invocation: mcp_amap_maps_maps_weather
Tool invocation parameters: { city: '深圳市' }
Tool invocation result: {
  "status": "1",
  "count": "1",
  "info": "OK",
  "lives": [
    {
      "province": "广东",
      "city": "深圳市",
      "adcode": "440300",
      "weather": "阴",
      "temperature": "26",
      "winddirection": "西南",
      "windpower": "≤3",
      "humidity": "69",
      "reporttime": "2025-04-09 11:00:36"
    }
  ]
}
Final response: The weather in Shenzhen today is overcast, the temperature is 26°C, the southwest wind force is ≤3, and the humidity is 69%.
```
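The last step of the flow above, turning the AMap live-weather payload into a sentence, could be sketched like this (an illustration only; field names follow the AMap response shown in the log, and the formatting helper is hypothetical):

```javascript
// Illustrative sketch: summarize the first entry of the AMap "lives"
// array into a one-line weather report.
function summarizeWeather(result) {
  const live = result.lives[0];
  return `${live.city}: ${live.weather}, ${live.temperature}°C, ` +
    `${live.winddirection} wind ${live.windpower}, humidity ${live.humidity}%`;
}

const amapResult = {
  status: '1',
  lives: [{ city: '深圳市', weather: '阴', temperature: '26',
            winddirection: '西南', windpower: '≤3', humidity: '69' }],
};
console.log(summarizeWeather(amapResult));
// → 深圳市: 阴, 26°C, 西南 wind ≤3, humidity 69%
```

In the real system this summarization is done by the LLM, which receives the tool result and phrases the final response.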
## SSE Client
The SSE client is implemented in the WeChat Mini Program and provides the following functions:
- **Chunked Data Reception**: Receives chunked response data to emulate SSE, since Mini Programs have no native `EventSource`
- **Event Handling**: Parses and processes various events sent by the server
- **Tool Invocation**: Supports the invocation of tools such as AMap and the display of results
- **Real-time Rendering**: Renders the information returned by the server to the chat interface in real-time
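The chunked-reception idea can be sketched as follows: the client buffers incoming text and splits it into SSE events itself. Names here are illustrative, not the actual `sseClient.js` API:

```javascript
// Sketch of an incremental SSE parser: buffer chunked text and emit one
// event per blank-line-terminated block.
function createSseParser(onEvent) {
  let buffer = '';
  return function feed(chunk) {
    buffer += chunk;
    let sep;
    // A blank line ("\n\n") terminates one SSE event.
    while ((sep = buffer.indexOf('\n\n')) !== -1) {
      const raw = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      let event = 'message';
      const dataLines = [];
      for (const line of raw.split('\n')) {
        if (line.startsWith('event:')) event = line.slice(6).trim();
        else if (line.startsWith('data:')) dataLines.push(line.slice(5).trim());
      }
      if (dataLines.length) onEvent(event, dataLines.join('\n'));
    }
  };
}

const events = [];
const feed = createSseParser((event, data) => events.push([event, data]));
feed('event: message\ndata: hel');  // partial chunk: buffered, no event yet
feed('lo\n\ndata: world\n\n');      // completes two events
console.log(events);
// → [ [ 'message', 'hello' ], [ 'message', 'world' ] ]
```

Buffering partial chunks matters because the network layer may split an SSE event across multiple chunks.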
### Client API
- `createSseConnection()`: Creates an SSE connection
- `sendChatMessage()`: Sends a regular chat message
- `checkServerStatus()`: Checks the server status
## Startup Method
### Start Server
```bash
# Install dependencies and start the server
npm run start-server
# Or use the debug script
./debug.sh
```
The server runs on port 3091 by default. If the port is occupied, it will automatically try to release it.
### Run Mini Program
1. Open the project with WeChat Developer Tools
2. In "Details" -> "Local Settings", check "Do not verify legal domain names"
3. Ensure the server is running normally
4. Preview the effect in the simulator or perform real machine debugging
## Development Notes
- Follow the WeChat Mini Program resource size limits; image and audio resources should not exceed 200 KB
- Use placeholder images instead of larger image resources
- SSE connection requires the server to support cross-domain requests
- When developing locally, make sure that "Do not verify legal domain names" is checked in WeChat Developer Tools
- When debugging on a real machine, the server address needs to be changed to the computer's local area network IP
## User Authentication System
This project implements a complete WeChat account login and chat count management function:
### Login Function
- **WeChat Account Login**: Use WeChat Mini Program's official `wx.getUserProfile` API to get user information
- **User Information Display**: Display user avatar, nickname, and other information on the personal page
- **Logout**: Supports users to log out of the current account
### Chat Count Management
- **First-Login Gift**: Users automatically receive 200 chat credits on their first login
- **Consumption Mechanism**: Each chat message consumes 1 credit
- **Balance Display**: Remaining credits are shown in real time on the personal page and chat page
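The credit rules described above could be sketched like this (an illustration of the logic only, not the actual cloud function code; the `openid` value is hypothetical):

```javascript
// Illustrative credit logic: 200 credits granted on first login,
// 1 consumed per chat.
const INITIAL_CREDITS = 200;

function grantInitialCredits(user) {
  // Only first-time logins (no credit field yet) receive the gift.
  if (user.chatCredits === undefined) {
    return { ...user, chatCredits: INITIAL_CREDITS };
  }
  return user;
}

function consumeCredit(user) {
  if (user.chatCredits <= 0) {
    throw new Error('No chat credits remaining');
  }
  return { ...user, chatCredits: user.chatCredits - 1 };
}

let user = grantInitialCredits({ openid: 'o_example' }); // hypothetical openid
user = consumeCredit(user);
console.log(user.chatCredits); // → 199
```

In production these updates run inside a cloud function so the balance cannot be tampered with on the client.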
### Cloud Development Integration
- **Cloud Functions**: Use WeChat Cloud Development's cloud functions to implement user authentication and data storage
- **Database**: Use Cloud Development database to store user information and chat counts
- **Local Debugging**: Supports local debugging mode for easy development and testing
## Future Plans
- Add more MCP service types
- Optimize the stability of SSE connections
- Enhance the visual display of tool invocation results
- Add chat count recharge function
- Implement user chat history storage
## Reference Documents
- [WeChat Mini Program Development Documentation](https://developers.weixin.qq.com/miniprogram/dev/framework/)
- [Server-Sent Events Tutorial](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events)
- [OpenAI API Documentation](https://platform.openai.com/docs/api-reference)
# MCP SSE Proxy Server
This server is a proxy server for the Model Context Protocol (MCP). It uses Server-Sent Events (SSE) to communicate with the MCP server and integrates OpenAI GPT-4o to intelligently select and call tools.
## Features
- Connects to the MCP server and manages the connection
- Uses OpenAI LLM to intelligently select the appropriate tools to handle user requests
- Supports multi-turn tool calls and integrates the results
- Automatically cleans up inactive connections
## Installation
```bash
npm install
```
## Configuration
1. Copy the `.env.example` file to `.env`
2. Configure the OpenAI API key and other settings in the `.env` file
```
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
OPENAI_BASE_URL=https://api.openai.com/v1
PORT=3091
```
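The server might read these variables roughly as follows (a sketch with assumed defaults; `.env` loading via dotenv is omitted, and `loadConfig` is a hypothetical helper, not the server's actual code):

```javascript
// Sketch: read configuration from environment variables with defaults
// matching the .env example above; fail fast if the API key is missing.
function loadConfig(env) {
  if (!env.OPENAI_API_KEY) {
    throw new Error('OPENAI_API_KEY is required');
  }
  return {
    apiKey: env.OPENAI_API_KEY,
    model: env.OPENAI_MODEL || 'gpt-4o',
    baseUrl: env.OPENAI_BASE_URL || 'https://api.openai.com/v1',
    port: Number(env.PORT) || 3091,
  };
}

console.log(loadConfig({ OPENAI_API_KEY: 'sk-test' }).port); // → 3091
```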
## Usage
Start the server:
```bash
npm start
```
### API Endpoints
1. **Create Connection** - `POST /connect`
```json
{
  "sseUrl": "https://example-sse-server.com/sse"
}
```
2. **Send Message** - `POST /send/:connectionId`
```json
{
  "message": "I want to know the current time in New York"
}
```
Or
```json
{
  "type": "tool_call",
  "tool": "get_current_time",
  "args": { "timezone": "America/New_York" }
}
```
3. **Get Available Tools** - `GET /tools/:connectionId`
4. **Close Connection** - `DELETE /connection/:connectionId`
5. **Check Server Status** - `GET /status`
## Processing Flow
1. User message is sent to the server
2. The server uses GPT-4o to analyze the message
3. LLM selects the appropriate tool and calls it
4. The server executes the tool call and returns the result
5. If needed, LLM may select more tool calls
6. Finally, LLM integrates all tool call results and provides a complete response
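The control flow above can be sketched with a stubbed LLM and tool registry (illustrative only; the real server drives OpenAI function calling, but the loop shape is the same: ask the model, run any requested tool, feed the result back, stop when the model answers in plain text):

```javascript
// Sketch of a capped multi-turn tool-call loop.
async function runToolLoop(llm, tools, userMessage) {
  const messages = [{ role: 'user', content: userMessage }];
  for (let turn = 0; turn < 5; turn++) { // cap turns to avoid infinite loops
    const reply = await llm(messages);
    if (!reply.toolCall) return reply.content; // plain text = final answer
    const { name, args } = reply.toolCall;
    const result = await tools[name](args); // execute the requested tool
    messages.push({ role: 'tool', name, content: JSON.stringify(result) });
  }
  throw new Error('Too many tool-call turns');
}

// Stub LLM: first asks for the time tool, then answers using its result.
const stubLlm = async (messages) => {
  const toolMsg = messages.find((m) => m.role === 'tool');
  if (!toolMsg) {
    return { toolCall: { name: 'get_current_time',
                         args: { timezone: 'America/New_York' } } };
  }
  return { content: `The time is ${JSON.parse(toolMsg.content).time}` };
};
const tools = { get_current_time: async () => ({ time: '09:00' }) };

runToolLoop(stubLlm, tools, 'What time is it in New York?')
  .then((answer) => console.log(answer)); // → The time is 09:00
```

Capping the number of turns is important in practice: a model that keeps requesting tools would otherwise hold the SSE connection open indefinitely.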
# WeChat Mini Program MCP Server Deployment Instructions
## Service Components
This project includes two main server-side components:
1. **Login Server (LoginServer)** - Handles WeChat Mini Program login authentication and user management
2. **MCP Proxy Service (Package)** - Provides MCP (Model Context Protocol) proxy functionality
## Deployment
### Prerequisites
- Node.js environment (v14+)
- PM2 process manager
- Nginx
- The server needs to have a public IP
- Domain name (if HTTPS is required)
### Deployment Script
The project provides an automated deployment script `deploy.sh`. Execute the following command to deploy:
```bash
chmod +x deploy.sh
./deploy.sh
```
The deployment script will:
1. Create a local build directory
2. Package the server-side code
3. Upload to the target server
4. Install dependencies
5. Configure Nginx reverse proxy
6. Start the service using PM2
7. Handle environment variable configuration
### Environment Variable Configuration
After deployment, you need to configure the environment variables for the two services:
**Login Server (.env)**
```
# Service configuration
PORT=3090
NODE_ENV=production
# JWT configuration
JWT_SECRET=your_jwt_secret_here
JWT_EXPIRES_IN=7d
# WeChat Mini Program configuration
WX_APPID=your_wx_appid_here
WX_SECRET=your_wx_secret_here
# Data storage configuration
DATA_DIR=data
```
**MCP Proxy Service (.env)**
```
# Service configuration
PORT=3091
NODE_ENV=production
# OpenAI configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
OPENAI_BASE_URL=https://api.openai.com/v1
```
### Manual Deployment Steps
If you need to deploy manually, please refer to the following steps:
1. Upload the server-side code to the server
2. Install dependencies
```bash
cd loginServer && npm install
cd ../package && npm install && npm run build
```
3. Configure environment variables
4. Start the service
```bash
cd loginServer && pm2 start app.js --name wxapp-login-server
cd ../package && pm2 start dist/sse_proxy_server.js --name mcp-proxy-server
```
5. Configure Nginx reverse proxy
### Nginx Configuration Example
```nginx
server {
    listen 80;
    server_name your-domain.com;

    # Login Server
    location /api/ {
        proxy_pass http://localhost:3090;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # MCP Proxy Server (SSE: disable buffering so events stream immediately)
    location / {
        proxy_pass http://localhost:3091;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_buffering off;
        proxy_read_timeout 24h;
    }
}
```
## Service Description
### Login Service (LoginServer)
The login service is responsible for handling WeChat Mini Program login authentication and user management. The main API endpoints include:
- `/api/user/login` - User login
- `/api/user/info` - Get/update user information
- `/api/user/chat-credits` - Manage chat counts
For detailed API documentation, please refer to `server_back/loginServer/README.md`
### MCP Proxy Service (Package)
The MCP proxy service provides the ability to interact with AI models through the SSE connection protocol. The main endpoints include:
- `/discover-mcp` - Discover MCP server
- `/connect` - Create connection
- `/send/:connectionId` - Send message
- `/tools/:connectionId` - List available tools
## Troubleshooting
If you encounter deployment problems, please check:
1. Server logs
```bash
pm2 logs
```
2. Nginx logs
```bash
tail -f /var/log/nginx/error.log
```
3. Ensure that the environment variables are configured correctly
4. Check whether the required ports are open