# MCP Web UI
MCP Web UI is a web-based user interface that serves as a Host within the Model Context Protocol (MCP) architecture. It provides a user-friendly interface for interacting with Large Language Models (LLMs) while aggregating context and coordinating the MCP clients and servers behind it.
## 🌟 Overview
MCP Web UI is designed to simplify and enhance interactions with AI language models by providing:
- A unified interface for multiple LLM providers
- Real-time, streaming chat experiences
- Flexible configuration and model management
- Robust context handling using the MCP protocol
### Demo Video
[Watch the demo video on YouTube](https://www.youtube.com/watch?v=DnC-z0CpRpM)
## 🚀 Features
- 🤖 **Multi-Provider LLM Integration**:
  - Anthropic (Claude models)
  - OpenAI (GPT models)
  - Ollama (local models)
  - OpenRouter (multiple providers)
- 💬 **Intuitive Chat Interface**
- 🔄 **Real-time Response Streaming** via Server-Sent Events (SSE); a consumer sketch follows this list
- 🔧 **Dynamic Configuration Management**
- 📊 **Advanced Context Aggregation**
- 💾 **Persistent Chat History** using BoltDB
- 🎯 **Flexible Model Selection**
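To make the streaming item above concrete, here is a minimal Go sketch of an SSE consumer, showing what the transport looks like on the wire. The `/sse` path is a placeholder assumption, not a documented route; consult `internal/handlers/` for the actual endpoints.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical streaming endpoint; see internal/handlers/ for real routes.
	resp, err := http.Get("http://localhost:8080/sse")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// SSE is line-oriented: each event carries one or more "data: <payload>"
	// lines, and a blank line terminates the event.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if payload, ok := strings.CutPrefix(scanner.Text(), "data: "); ok {
			fmt.Println(payload)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```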
## 📋 Prerequisites
- Go 1.23+
- Docker (optional)
- API keys for desired LLM providers
## 🛠 Installation
### Quick Start
1. Clone the repository:
```bash
git clone https://github.com/MegaGrindStone/mcp-web-ui.git
cd mcp-web-ui
```
2. Configure your environment:
```bash
mkdir -p $HOME/.config/mcpwebui
cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
```
3. Set up API keys:
```bash
export ANTHROPIC_API_KEY=your_anthropic_key
export OPENAI_API_KEY=your_openai_key
export OPENROUTER_API_KEY=your_openrouter_key
```
### Running the Application
#### Local Development
```bash
go mod download
go run ./cmd/server/main.go
```
#### Docker Deployment
```bash
docker build -t mcp-web-ui .
docker run -p 8080:8080 \
  -v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
  -e ANTHROPIC_API_KEY \
  -e OPENAI_API_KEY \
  -e OPENROUTER_API_KEY \
  mcp-web-ui
```
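If you prefer Compose, the `docker run` command above translates roughly into the hypothetical `docker-compose.yaml` below (the repository is not confirmed to ship one):

```yaml
# Hypothetical Compose equivalent of the docker run command above.
services:
  mcp-web-ui:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - $HOME/.config/mcpwebui/config.yaml:/app/config.yaml
    environment:
      - ANTHROPIC_API_KEY
      - OPENAI_API_KEY
      - OPENROUTER_API_KEY
```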
## 🔧 Configuration
The configuration file (`config.yaml`) provides comprehensive settings for customizing the MCP Web UI. Here's a detailed breakdown:
### Server Configuration
- `port`: The port on which the server will run (default: 8080)
- `logLevel`: Logging verbosity (options: debug, info, warn, error; default: info)
- `logMode`: Log output format (options: json, text; default: text)
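For instance, a verbose development setup might look like the following (values are illustrative; the keys are exactly the ones listed above):

```yaml
port: 8080
logLevel: debug # debug, info, warn, or error
logMode: json   # json or text
```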
### Prompt Configuration
- `systemPrompt`: Default system prompt for the AI assistant
- `titleGeneratorPrompt`: Prompt used to generate chat titles
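Both prompts are plain strings in `config.yaml`. The wording below is purely illustrative, not a shipped default:

```yaml
systemPrompt: You are a helpful assistant.
titleGeneratorPrompt: Write a short, descriptive title for this conversation.
```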
### LLM (Language Model) Configuration
The `llm` section supports multiple providers with provider-specific configurations:
#### Common LLM Parameters
- `provider`: Choose from: ollama, anthropic, openai, openrouter
- `model`: Specific model name (e.g., 'claude-3-5-sonnet-20241022')
- `parameters`: Fine-tune model behavior:
  - `temperature`: Randomness of responses (0.0-1.0)
  - `topP`: Nucleus sampling threshold
  - `topK`: Number of highest-probability tokens to keep
  - `frequencyPenalty`: Reduce repetition of token sequences
  - `presencePenalty`: Encourage discussing new topics
  - `maxTokens`: Maximum response length
  - `stop`: Sequences to stop generation
  - And more provider-specific parameters
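Putting the common keys together, a tuned `llm` block might look like this sketch (all values are illustrative, and not every provider honors every parameter):

```yaml
llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7
    topP: 0.9
    topK: 40
    maxTokens: 2048
    stop:
      - "END"
```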
#### Provider-Specific Configurations
- **Ollama**:
  - `host`: Ollama server URL (default: http://localhost:11434)
- **Anthropic**:
  - `apiKey`: Anthropic API key (can use ANTHROPIC_API_KEY env variable)
  - `maxTokens`: Maximum token limit
- **OpenAI**:
  - `apiKey`: OpenAI API key (can use OPENAI_API_KEY env variable)
- **OpenRouter**:
  - `apiKey`: OpenRouter API key (can use OPENROUTER_API_KEY env variable)
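For example, a local Ollama setup pairs the common keys with the provider-specific `host`. Placing `host` at the same level as `model` is an assumption about the schema, and the model name is a placeholder:

```yaml
llm:
  provider: ollama
  model: llama3.1
  host: http://localhost:11434
```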
### Title Generator Configuration
The `genTitleLLM` section lets you configure a separate model for generating chat titles; if it is omitted, the main `llm` configuration is used.
### MCP Server Configurations
- `mcpSSEServers`: Configure Server-Sent Events (SSE) servers
  - `url`: SSE server URL
  - `maxPayloadSize`: Maximum payload size
- `mcpStdIOServers`: Configure Standard Input/Output (stdio) servers
  - `command`: Command to run the server
  - `args`: Arguments for the server command
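A hedged sketch of both server types follows. Treating each section as a list of entries is an assumption about the schema, and the URL, payload size, and command are placeholders:

```yaml
mcpSSEServers:
  - url: https://mcp.example.com/sse
    maxPayloadSize: 1048576 # assumed to be bytes
mcpStdIOServers:
  - command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```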
### Example Configuration Snippet
```yaml
port: 8080
logLevel: info
systemPrompt: You are a helpful assistant.
llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7
    maxTokens: 1000
genTitleLLM:
  provider: openai
  model: gpt-3.5-turbo
```
## 🏗 Project Structure
- `cmd/`: Application entry point
- `internal/handlers/`: Web request handlers
- `internal/models/`: Data models
- `internal/services/`: LLM provider integrations
- `static/`: Static assets (CSS)
- `templates/`: HTML templates
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push and create a Pull Request
## 📄 License
MIT License