# my-langgraph-agent
## Project Introduction
This is an intelligent chatbot project built on LangGraph and LangChain, with tool-calling and ReAct reasoning capabilities. The chatbot can call a variety of tools to obtain real-time information, such as weather queries, historical event lookups, and web searches, in order to give users accurate and timely answers. The project particularly demonstrates how to integrate Model Context Protocol (MCP) servers into a ReAct agent built with LangGraph, allowing the large language model to call MCP tools autonomously.
## Main Features
- 🤖 **Intelligent Dialogue**: Natural language understanding and generation powered by large language models
- 🔧 **Tool Calling**: Supports integration of multiple external tools, including weather queries, search engines, historical event queries, etc.
- 🧠 **ReAct Reasoning**: Adopts a reasoning-action-observation loop mode, providing a structured thinking process
- 📊 **Token Statistics**: Real-time monitoring and statistics of API call token usage
- 💬 **Dialogue Memory**: Supports multi-turn dialogue, maintaining context continuity
- 🌐 **API Services**: Includes FastAPI backend services, supporting user information queries
- 🔌 **MCP Integration**: Supports seamless integration of Model Context Protocol server tools
- ⚡ **Synchronous/Asynchronous**: Provides both synchronous and asynchronous implementations to meet different performance needs
## Project Architecture
The project includes the following core components:
1. **LangGraph ReAct Agent** - Intelligent agent built on LangGraph, supporting tool calling and reasoning
2. **MCP Server** - Server providing various tool functions, such as BMI calculation, weather queries, etc.
3. **MCP Tool Adapter** - Adapter layer converting MCP tools to LangChain tool format
4. **FastAPI Service** - Provides API services for user information queries
## Project Structure
```
├── sync/                              # Synchronous implementation directory
│   ├── agent_chatbot_LCEL.py          # LCEL-based agent implementation (synchronous)
│   ├── agent_chatbot_systemmessage.py # System message-based agent implementation (synchronous)
│   ├── fastmcp_server_streamhttp.py   # MCP server implementation (synchronous)
│   ├── mcp_tools_adapter.py           # MCP tool adapter (synchronous)
│   └── user_api.py                    # User information API service (synchronous)
│
├── async/                             # Asynchronous implementation directory
│   ├── agent_chatbot_async.py         # Asynchronous agent implementation
│   ├── fastmcp_server_async.py        # Asynchronous MCP server
│   ├── mcp_tools_adapter_async.py     # Asynchronous MCP tool adapter
│   └── user_api_async.py              # Asynchronous user information API service
│
├── .gitignore                         # Git ignore file configuration
├── requirements.txt                   # Dependency package list
└── README.md                          # Project overview document
```
## Core Concept
The core concept of the entire project is **tool standardization**:
- All tools (local functions, API services, MCP services) must provide standardized interfaces
- Each tool must have a clear `name` (tool name)
- Each tool must have a detailed `description` (tool description)
- Each tool must define a clear parameter structure (`args`)
This standardization enables large models to understand tool functions and call them correctly, while also allowing tools from different sources to be seamlessly integrated.
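For illustration, here is what such a standardized tool looks like on the LangChain side, using the project's date tool as an example (a minimal sketch, not the project's actual implementation):
```python
from datetime import date
from langchain_core.tools import tool

@tool
def get_today() -> str:
    """Get the current system date in YYYY-MM-DD format."""
    return date.today().isoformat()

# Every tool exposes the same standardized metadata:
print(get_today.name)         # "get_today"
print(get_today.description)  # the docstring above
print(get_today.args)         # {} -- this tool takes no parameters
```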
## Environment Requirements
The project uses two separate conda environments:
### MCP Server Environment
- Python 3.10+ (recommended 3.10.18)
- conda environment: `mcp_env`
### LangGraph Agent Environment
- Python 3.13+ (recommended 3.13.5)
- conda environment: `langgraph_env`
## Dependency Installation
### MCP Server Environment
```bash
# Create MCP environment
conda create -n mcp_env python=3.10.18
conda activate mcp_env
# Install dependencies
pip install fastmcp requests python-dotenv
```
### LangGraph Agent Environment
```bash
# Create LangGraph environment
conda create -n langgraph_env python=3.13.5
conda activate langgraph_env
# Install dependencies
pip install langchain langchain-openai langchain-deepseek langchain-community
pip install langgraph fastapi uvicorn requests python-dotenv
pip install fastmcp # For MCP client functionality
```
### Additional Asynchronous Environment Dependencies
```bash
# Install asynchronous support in LangGraph environment
pip install aiohttp
```
## Environment Variable Configuration
Create a `.env` file and configure necessary API keys:
```
DEEPSEEK_API_KEY=your_deepseek_api_key
TAVILY_API_KEY=your_tavily_api_key
OPENAI_API_KEY=your_openai_api_key # Optional
KIMI_API_KEY=your_kimi_api_key # Optional, for asynchronous version
```
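The scripts load these variables with `python-dotenv`; a minimal sketch of how the keys are typically read:
```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory

deepseek_key = os.getenv("DEEPSEEK_API_KEY")
tavily_key = os.getenv("TAVILY_API_KEY")
if not deepseek_key or not tavily_key:
    raise RuntimeError("Missing required API keys; check your .env file")
```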
## Usage
### Synchronous Version
#### Step 1: Start MCP Server
First, start the MCP server in the MCP environment:
```bash
conda activate mcp_env
python sync/fastmcp_server_streamhttp.py
```
The server will run on `http://127.0.0.1:8083/my-custom-path`, providing the following tools:
- `calculate_bmi` - Calculate BMI index
- `get_current_time` - Get current time
- `get_weather` - Get weather for a specified city
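For orientation, here is a minimal sketch of such a FastMCP server (a simplified illustration rather than the project's actual `fastmcp_server_streamhttp.py`; `get_weather` is omitted, and the exact `run()` options may vary with your FastMCP version):
```python
from datetime import datetime
from fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate the BMI index from weight (kg) and height (m)."""
    return round(weight_kg / (height_m ** 2), 2)

@mcp.tool()
def get_current_time() -> str:
    """Get the current system time."""
    return datetime.now().isoformat()

if __name__ == "__main__":
    # Streamable HTTP transport on the URL used throughout this README
    mcp.run(transport="streamable-http", host="127.0.0.1", port=8083, path="/my-custom-path")
```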
#### Step 2: Start User API Service (Optional)
If you need to use the user information query function:
```bash
conda activate langgraph_env
python sync/user_api.py
```
The service will run on `http://127.0.0.1:8000`.
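For reference, a hypothetical minimal version of such a user-info service (the actual endpoint path, response fields, and data source in `sync/user_api.py` may differ):
```python
from fastapi import FastAPI, HTTPException
import uvicorn

app = FastAPI()

# Hypothetical in-memory user data, for illustration only
USERS = {"1001": {"name": "Alice", "city": "Beijing"}}

@app.get("/users/{user_id}")
def get_user(user_id: str):
    """Return user information for the given user ID, or 404 if unknown."""
    user = USERS.get(user_id)
    if user is None:
        raise HTTPException(status_code=404, detail="User not found")
    return user

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```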
#### Step 3: Start LangGraph Agent
Start the agent in the LangGraph environment, and specify the MCP server URL via command-line parameters:
```bash
conda activate langgraph_env
python sync/agent_chatbot_LCEL.py --mcp http://127.0.0.1:8083/my-custom-path
```
The agent will automatically connect to the MCP server, obtain available tools, and add them to the tool list.
### Asynchronous Version
#### Step 1: Start Asynchronous MCP Server
```bash
conda activate mcp_env
python async/fastmcp_server_async.py
```
The server will run on `http://127.0.0.1:8084/my-custom-path`.
#### Step 2: Start Asynchronous User API Service (Optional)
```bash
conda activate langgraph_env
python async/user_api_async.py
```
The service will run on `http://127.0.0.1:8001`.
#### Step 3: Start Asynchronous LangGraph Agent
```bash
conda activate langgraph_env
python async/agent_chatbot_async.py
```
#### Step 4: Run Performance Test (Optional)
Compare the performance differences between synchronous and asynchronous versions:
```bash
python async/agent_chatbot_async.py --perf-test
```
### Command-Line Parameters
#### Synchronous Version Parameters
`agent_chatbot_LCEL.py` supports the following command-line parameters:
- `--mcp <url>`: Main MCP server URL
- `--mcp2 <url>`: Second MCP server URL
- `--no-mcp`: Disable all MCP tools
#### Asynchronous Version Parameters
`agent_chatbot_async.py` supports the following command-line parameters:
- `--mcp <url>`: MCP server URL
- `--no-mcp`: Disable MCP tools
- `--perf-test`: Run performance test
### Interactive Commands
In the chat interface, you can use the following commands:
- `stats`: View token usage statistics
- `quit` or `exit`: Exit the program
## Working Principle
### LangGraph and MCP Integration
This project demonstrates how two complementary technologies work together:
1. **LangGraph** - Provides powerful workflow control and state management capabilities, enabling the agent to perform complex reasoning and decision-making
2. **MCP (Model Context Protocol)** - Provides standardized tool interfaces, allowing external tools to be seamlessly called by large models
These two technologies seem independent but are unified through the **MCP tool adapter**:
```
LangGraph Agent <---> MCP Tool Adapter <---> MCP Server
```
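Conceptually, the agent side can be assembled with LangGraph's prebuilt ReAct agent. The sketch below is a simplified illustration (the project's actual agents add the structured ReAct prompt, token statistics, and dialogue memory on top of this), shown with a single local tool:
```python
from datetime import date
from langchain_core.tools import tool
from langchain_deepseek import ChatDeepSeek
from langgraph.prebuilt import create_react_agent

@tool
def get_today() -> str:
    """Get the current system date in YYYY-MM-DD format."""
    return date.today().isoformat()

# Tools loaded through the MCP adapter would be appended to this list as well
llm = ChatDeepSeek(model="deepseek-chat")  # requires DEEPSEEK_API_KEY in the environment
agent = create_react_agent(llm, [get_today])

result = agent.invoke({"messages": [("user", "What is today's date?")]})
print(result["messages"][-1].content)
```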
### MCP Tool Adapter
The `mcp_tools_adapter.py` file implements an adapter that converts MCP tools to LangChain tool format:
1. Connect to the MCP server and obtain the tool list
2. Create a corresponding LangChain tool for each MCP tool
3. Handle tool calls and result conversion
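A minimal sketch of the first and third steps, assuming the `fastmcp` client API (`list_tools` / `call_tool`); the actual adapter additionally wraps each MCP tool into a LangChain tool object (step 2), which is omitted here:
```python
import asyncio
from fastmcp import Client

MCP_URL = "http://127.0.0.1:8083/my-custom-path"

async def explore_mcp_server():
    async with Client(MCP_URL) as client:
        # Step 1: fetch the tool list (name, description, input schema)
        for t in await client.list_tools():
            print(t.name, "-", t.description)
        # Step 3: call a tool by name and read back the result
        result = await client.call_tool("calculate_bmi", {"weight_kg": 70, "height_m": 1.75})
        print(result)

asyncio.run(explore_mcp_server())
```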
### Synchronous and Asynchronous Comparison
The project provides both synchronous and asynchronous implementations, each with its advantages:
**Synchronous Implementation**:
- Code is simple and intuitive, easy to understand
- Suitable for simple scenarios and learning
- Executed in order, easy to debug
**Asynchronous Implementation**:
- Higher performance, especially in I/O-intensive operations
- Can handle multiple requests simultaneously
- Reduces waiting time and increases throughput
In performance tests, the asynchronous version usually shows significant performance advantages, especially in scenarios involving multiple network requests.
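The gap comes mainly from overlapping I/O waits. The toy example below (a simulated slow tool call, not one of the project's real tools) illustrates the idea:
```python
import asyncio
import time

async def fake_tool_call(name: str) -> str:
    await asyncio.sleep(1.0)  # simulate a 1-second network round trip
    return f"{name}: done"

async def main():
    start = time.perf_counter()
    # Three independent calls overlap, finishing in roughly 1 second;
    # a synchronous implementation would run them back to back (~3 seconds).
    results = await asyncio.gather(*(fake_tool_call(f"tool{i}") for i in range(3)))
    print(results, f"{time.perf_counter() - start:.1f}s")

asyncio.run(main())
```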
## Available Tools
### Local Tools
1. **Web Search** (`internet_search_engine`)
   - Use Tavily search engine to obtain the latest web information
   - Returns up to 2 search results
2. **Date Query** (`get_today`/`get_today_async`)
   - Get the current system date
   - Format: YYYY-MM-DD
3. **Historical Event Query** (`get_historical_events_on_date`/`get_historical_events_on_date_async`)
   - Query historical events for a specified date
   - Requires month and day parameters
4. **User Information Query** (`get_user_info`/`get_user_info_async`)
   - Query user information from internal API
   - Requires starting the FastAPI service
### MCP Server Tools
1. **BMI Calculation** (`calculate_bmi`)
   - Calculate body mass index
   - Parameters: weight (kg) and height (m)
2. **Current Time** (`get_current_time`)
   - Get current system time
   - No parameters
3. **Weather Query** (`get_weather`)
   - Get weather for a specified city
   - Parameters: city name and date
## ReAct Reasoning Mode
The chatbot follows a structured reasoning process:
1. **Key Concepts** - Identify core concepts in the task
2. **Concept Decomposition** - Decompose complex concepts into queryable sub-elements
3. **Reflection** - Analyze obtained information and missing parts
4. **Thinking** - Analyze dependencies and query priorities between elements
5. **Reasoning** - Develop information acquisition strategies
6. **Planning** - Develop specific execution steps
7. **Plan Verification** - Verify plan feasibility
8. **Plan Improvement** - Optimize and adjust execution plans
## Example Dialogue
### BMI Calculation Example
```
You: I'm 1.75 meters tall and weigh 70 kilograms, what's my BMI?
ReAct Agent: **Key Concepts**: BMI index
- Height: 1.75 meters
- Weight: 70 kilograms
**Concept Decomposition**:
BMI index
- Height: 1.75 meters
- Weight: 70 kilograms
- BMI calculation formula: weight (kg) / height (m)²
**Reflection**:
A. I have obtained height and weight values, but not BMI.
B. Known information: Height = 1.75 meters, Weight = 70 kilograms.
C. Not obtained: BMI index value.
**Thinking**:
A. I need to obtain BMI index value.
B. I can use the calculate_bmi tool, which requires height and weight as input parameters.
C. Currently, I need to obtain BMI index value.
**Reasoning**:
I have height (1.75 meters) and weight (70 kilograms), so I can directly call the calculate_bmi tool to calculate BMI.
**Plan**:
Call calculate_bmi tool with parameters weight_kg=70, height_m=1.75.
**Plan Verification**:
A. Known constants: Height = 1.75 meters, Weight = 70 kilograms.
B. Current plan does not involve exhaustive file records.
C. Current plan does not depend on other unobtained sub-elements.
D. Current plan does not make assumptions about sub-element values.
E. Sub-tasks not completed, need to continue.
Based on your provided height (1.75 meters) and weight (70 kilograms), your BMI index is 22.86.
This falls within the normal weight range (18.5-24.9), indicating a healthy weight. Maintaining a good diet and appropriate exercise will help you keep this healthy BMI value.
```
### Weather Query Example
```
You: What's the weather like in Beijing today?
ReAct Agent: **Key Concepts**: Today's weather in Beijing
**Concept Decomposition**:
- Today's date
- Beijing weather information
**Reflection**:
A. Need to obtain today's date and Beijing weather information
B. Currently, no information obtained
C. Missing: Current date, Beijing weather data
**Thinking**:
A. Can directly query Beijing weather without getting the date
B. Weather query tool can provide real-time information
**Reasoning**: Directly call the weather query tool to obtain Beijing's real-time weather
**Plan**: Call get_weather tool with parameter "Beijing"
Beijing current weather: Sunny, temperature 25°C, perceived temperature 27°C, humidity 45%. It's a good day!
```
## Extension Development
### Add New MCP Tools
To add new tools to the MCP server, simply add new tool functions in `fastmcp_server_streamhttp.py` or `fastmcp_server_async.py`:
```python
# Synchronous version
@mcp.tool()
def new_tool_function(param1: type, param2: type) -> return_type:
    """Tool description...

    Args:
        param1: Parameter 1 description
        param2: Parameter 2 description

    Returns:
        Return value description
    """
    # Tool implementation...
    return result


# Asynchronous version
@mcp.tool()
async def new_tool_function_async(param1: type, param2: type) -> return_type:
    """Tool description...

    Args:
        param1: Parameter 1 description
        param2: Parameter 2 description

    Returns:
        Return value description
    """
    # Asynchronous tool implementation...
    await asyncio.sleep(0.01)  # Simulate an asynchronous operation
    return result
```
After adding new tools, restart the MCP server and then restart the agent so that the new tools become available.
### Multi-MCP Server Support
The synchronous version of the agent supports connecting to multiple MCP servers:
```bash
python sync/agent_chatbot_LCEL.py --mcp http://server1:8083/path --mcp2 http://server2:8086/path
```
If two servers provide tools with the same name, consider adding prefixes when loading tools to avoid conflicts.
### Performance Optimization
1. **Cache Mechanism** (see the sketch after this list)
   - Cache frequently called tool results
   - Set cache expiration time
   - Implement cache invalidation strategy
2. **Concurrency Control**
   - Limit the number of concurrent requests
   - Implement request queue and priority
   - Add timeout and retry mechanisms
3. **Load Balancing**
   - Distribute requests across multiple MCP servers providing the same function
   - Implement health check and failover
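As an illustration of the first two points, a simple time-based cache combined with a concurrency limit could be layered around asynchronous tool calls. The sketch below is purely illustrative; the helper name, TTL, and limits are not part of the project:
```python
import asyncio
import time

_CACHE: dict[tuple, tuple[float, object]] = {}
CACHE_TTL = 60.0                    # seconds before a cached result expires
_SEMAPHORE = asyncio.Semaphore(5)   # at most 5 concurrent outbound tool calls

async def cached_call(tool, **kwargs):
    """Call an async tool with caching, a concurrency limit, and a timeout."""
    key = (tool.__name__, tuple(sorted(kwargs.items())))
    hit = _CACHE.get(key)
    if hit and time.monotonic() - hit[0] < CACHE_TTL:
        return hit[1]                                    # fresh enough: serve from cache
    async with _SEMAPHORE:                               # limit concurrent requests
        result = await asyncio.wait_for(tool(**kwargs), timeout=10)
    _CACHE[key] = (time.monotonic(), result)
    return result
```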
## Troubleshooting
### MCP Connection Issues
If the agent cannot connect to the MCP server, please check:
1. Is the MCP server running?
2. Is the URL correct?
3. Is the network connection normal?
4. Are proxy settings adjusted?
### Tool Call Failure
If tool calls fail, possible reasons include:
1. Parameter type mismatch
   - Large models may return string-formatted numbers, while tools require numeric types
   - Solution: Add type conversion logic in the tool adapter (see the sketch after this list)
2. Unclear tool description
   - If tool descriptions are not detailed enough, large models may not use them correctly
   - Solution: Provide detailed parameter descriptions, examples, and usage scenarios
3. Network connection issues
   - External APIs may be temporarily unavailable
   - Solution: Add error handling and retry mechanisms
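As an illustration of the first point, the adapter could coerce string-formatted numbers before forwarding arguments to a tool. The helper below is hypothetical, not the project's actual code:
```python
def coerce_numeric_args(kwargs: dict) -> dict:
    """Convert string-formatted numbers (e.g. "70" or "1.75") into real numbers."""
    coerced = {}
    for key, value in kwargs.items():
        if isinstance(value, str):
            try:
                value = int(value) if value.lstrip("-").isdigit() else float(value)
            except ValueError:
                pass  # leave non-numeric strings untouched
        coerced[key] = value
    return coerced

# Example: {"weight_kg": "70", "height_m": "1.75"} -> {"weight_kg": 70, "height_m": 1.75}
```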
### Performance Issues
If you encounter performance issues, consider:
1. Switch to asynchronous version
2. Reduce unnecessary network requests
3. Optimize tool implementation
4. Add cache mechanisms
## Technical Architecture
- **LangGraph**: Used to build complex AI workflow graphs
- **LangChain**: Provides LLM integration and tool management
- **FastMCP**: Provides standardized tool interfaces
- **DeepSeek/OpenAI/Kimi**: Large language model providers
- **FastAPI**: Lightweight web framework for API services
- **Asyncio/Aiohttp**: Asynchronous programming support
## Contribution Guidelines
Issues and pull requests to improve the project are welcome. Before contributing code, please ensure:
1. Code complies with project coding standards
2. Add necessary comments and documentation
3. Test the correctness of new features
4. Update relevant documentation
## License
This project uses the MIT license, see LICENSE file for details.
## Contact Information
If you have questions or suggestions, please contact:
- Submit GitHub Issue: https://github.com/mr-jay-wei/my-langgraph-agent
- Send email to project maintainer Jay Wei: xiaofeng.0209@gmail.com
*Last updated: 2025-07-07*