# MCP (Gugudan Server) Study Project
This project is a hands-on example of the **MCP architecture**: Gugudan (multiplication table) servers implemented in Python (FastAPI) and Node.js (Express), plus a client that integrates with an LLM (Large Language Model).
It covers the full folder structure, source explanations, operating principles, hands-on steps, API examples, and learning points so that students, developers, and other users can easily follow along.
To run it, install LM Studio locally and refer to `lmstudio_client.py` for the code that sends questions to the LLM.
---
## 1. Overall Folder/File Structure
```
backend/
├── app/
│   ├── mcp_gugudan_server.py    # Python Gugudan MCP server (FastAPI)
│   ├── client.py                # MCP server / LLM smart-routing client
│   ├── lmstudio_client.py       # LM Studio (local LLM) integration module
│   └── __init__.py
├── js_mcp_gugudan_server/
│   ├── server.js                # Node.js Gugudan MCP server (Express)
│   ├── package.json             # Node.js dependencies
│   └── README.md                # How to run the JS server
├── requirements.txt             # Python dependencies
├── .gitignore                   # Excludes unnecessary files
├── .python-version              # Pins the Python version
└── README.md                    # (top level) overall project description
```
---
## 2. Source Explanation and Architecture Flow
### (1) Python MCP Gugudan Server
- **mcp_gugudan_server.py**
  - Calculates and returns the Gugudan (multiplication table) from 1 to 9 at the `/mcp/gugudan` endpoint.
  - Example: `{"query": "Tell me the 3 times table"}` → returns the 3 times table results.
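The core calculation behind this endpoint can be sketched in plain Python. This is an illustrative sketch, not the server's actual source; the function name and exact line format are assumptions based on the example output in section 3:

```python
def gugudan(dan: int) -> list[str]:
    """Build the `dan` times table from x1 through x9 as formatted lines."""
    return [f"{dan} x {i} = {dan * i}" for i in range(1, 10)]

# e.g. gugudan(3) starts with "3 x 1 = 3" and ends with "3 x 9 = 27"
```

The server simply wraps a calculation like this in a `POST /mcp/gugudan` route and returns the lines in its JSON response.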
### (2) Node.js MCP Gugudan Server
- **server.js**
  - Provides `POST /mcp/gugudan`, just like the Python server.
  - Built on Express; the API response format is also the same.
### (3) Smart Client
- **client.py**
  - If the question is related to Gugudan, it requests the MCP server first; otherwise it automatically falls back to the LLM (LM Studio).
  - Example: "Tell me the 9 times table" → MCP server; "What is the population of South Korea?" → LLM.
- **lmstudio_client.py**
  - Communicates with the LM Studio API (local LLM responses).
  - LM Studio is a free LLM server that runs directly on your PC and exposes an OpenAI-compatible REST API.
  - This project receives LLM responses from a locally running LM Studio instance (e.g., `http://localhost:1234/v1/chat/completions`).
### (4) Others
- **requirements.txt / package.json**: Dependencies for each language.
- **.gitignore**: Excludes temporary files from Python, Node.js, editors, and the OS.
#### Architecture Flow Diagram
```
[User Question]
↓
[client.py]
├─(Related to Gugudan)─→ [MCP Server (Python/JS)]
└─(Other Questions)────→ [LLM (LM Studio)]
```
---
## 3. Operation and Practical Method (Step by Step)
### (A) Practicing with Python MCP Gugudan Server
1. Install dependencies
```bash
pip install -r requirements.txt
```
2. Run the server
```bash
python -m app.mcp_gugudan_server
# or
uv run python -m app.mcp_gugudan_server
```
3. Run the client
```bash
python -m app.client
```
### (B) Practicing with Node.js MCP Gugudan Server
1. Change directory and install dependencies
```bash
cd js_mcp_gugudan_server
npm install
```
2. Run the server
```bash
npm start
```
### (C) Direct API Testing
```bash
curl -X POST http://localhost:8000/mcp/gugudan -H "Content-Type: application/json" -d '{"query": "Tell me the 3 times table"}'
```
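The same request can be sent from Python using only the standard library. The URL and payload below mirror the curl example; the shape of the JSON response depends on the server implementation, so it is returned as-is:

```python
import json
import urllib.request

def build_body(query: str) -> bytes:
    # JSON body identical to the curl example's -d payload
    return json.dumps({"query": query}).encode("utf-8")

def post_gugudan(query: str, url: str = "http://localhost:8000/mcp/gugudan") -> dict:
    req = urllib.request.Request(
        url,
        data=build_body(query),
        headers={"Content-Type": "application/json"},
    )
    # Requires the Python MCP server to be running on port 8000
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```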
### (D) Testing Functionality with client.py
#### 1. Run client.py
```bash
python -m app.client
# or
uv run python -m app.client
```
#### 2. Example Output (Response)
```
Question: Tell me the 3 times table
[MCP Server Response]
3 x 1 = 3
3 x 2 = 6
...
3 x 9 = 27
Question: What is the population of South Korea?
-> This question is not related to the MCP server. I will answer with LLM.
[LLM Direct Response]
As of December 31, 2023, the population of South Korea is approximately 51,814,000.
Question: Tell me the 80 times table
-> This question is not related to the MCP server. I will answer with LLM.
[LLM Direct Response]
80 x 1 = 80
80 x 2 = 160
...
80 x 9 = 720
```
- The MCP server directly computes and returns answers for Gugudan questions (the 1 to 9 times tables).
- Other questions (general knowledge, the 80 times table, etc.) automatically fall back to the LLM (LM Studio).
- `client.py` makes it easy to test the actual routing and response behavior.
---
## 4. Learning Points & Practical Tips
- **MCP Architecture**: Practice routing requests among multiple processing components (servers/LLM) depending on the question.
- **API Design**: Practice designing the same REST API in both Python and Node.js.
- **Error Handling**: Requests outside the supported range (e.g., "Tell me the 80 times table") are handled gracefully by falling back to the LLM rather than failing.
- **Scalability**: Easily extend the same structure for mathematical operations beyond Gugudan and other AI functionalities.
- **Real-world Integration**: Includes practical API usage methods such as curl and client code.
---
## 5. References/Additions
- `.gitignore` excludes temporary files from Python/Node.js/editors/the OS.
- For examples of LM Studio (Llama3, etc.) API integration, refer to `app/lmstudio_client.py`.
- For the Node.js version, refer to `js_mcp_gugudan_server/README.md`.
---
## [Appendix] LM Studio Configuration and Usage Guide
### What is LM Studio?
- **LM Studio** is a free LLM (Large Language Model) server that runs directly on your PC.
- It provides a REST API compatible with the OpenAI API (e.g., `http://localhost:1234/v1/chat/completions`).
- You can download various open models (e.g., Llama 3) and use them privately on your local machine.
### Usage in This Project
- In `app/lmstudio_client.py`, questions are sent to the LM Studio API and the LLM generates the responses.
- The client (`client.py`) automatically routes questions the MCP server cannot handle (anything other than Gugudan) to LM Studio.
- LM Studio must be running on your PC; the default port is 1234.
### Example LM Studio Setup
1. Download and install from the [LM Studio official site](https://lmstudio.ai/).
2. After launching LM Studio, select and download the desired model (e.g., Llama3).
3. Enable the "OpenAI Compatible API" feature (toggle in settings).
4. Once the server is running, API requests can be made to `http://localhost:1234/v1/chat/completions`.
### Example Code (app/lmstudio_client.py)
```python
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
# ... rest omitted ...
```
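A fuller sketch of what such a module might look like, using only the standard library and LM Studio's OpenAI-compatible chat endpoint. The model name and parameters below are placeholders, not the project's actual values:

```python
import json
import urllib.request

LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(question: str) -> dict:
    # OpenAI-style chat payload; LM Studio serves whichever model is loaded
    return {
        "model": "local-model",  # placeholder; LM Studio typically serves the loaded model
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.7,
    }

def ask_llm(question: str) -> str:
    req = urllib.request.Request(
        LMSTUDIO_API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Requires LM Studio to be running locally on port 1234
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response: take the first choice's message content
    return body["choices"][0]["message"]["content"]
```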
---
This document is designed to be practically helpful for students, developers, and users who are learning about the MCP architecture and server-client-LLM integration for the first time. If you have any questions or expansion ideas, feel free to ask!