# Politician Trading Data Assistant
A full-stack chat application that connects a static frontend (GitHub Pages) to a Node.js + Express backend, which integrates with the Hugging Face Inference API and the [MCP Capitol Trades](https://www.npmjs.com/package/@anguslin/mcp-capitol-trades) server. The system processes user messages through a conversational UI, maintains conversation history, selects MCP tools automatically via the protocol, and returns a summarized response generated by the LLM.
## Project Structure
```
politician-trading-data-assistant/
├── backend/ # Node.js + Express backend server
│ ├── config/ # Configuration and constants
│ ├── middleware/ # Express middleware (auth, CORS, rate limiting)
│ ├── routes/ # API routes
│ ├── services/ # Business logic (LLM, MCP, history, prompts)
│ ├── scripts/ # Test scripts
│ ├── server.js # Main server entry point
│ └── package.json # Backend dependencies
│
└── ui/ # Static frontend for GitHub Pages
├── index.html # Main HTML file
├── styles.css # Styling
├── app.js # Frontend JavaScript
└── README.md # Frontend setup instructions
```
## Quick Start
### Backend Setup
1. Navigate to the backend directory:
```bash
cd backend
```
2. Install dependencies:
```bash
npm install
```
3. Create a `.env` file based on `.env.example`:
```bash
cp .env.example .env
```
4. Configure your environment variables in `.env`:
- `HF_API_KEY`: Your Hugging Face API token (required)
- `API_KEY`: Custom API key for authentication (required)
- `HF_MODEL_TYPE`: Hugging Face model identifier to use (required, e.g. `mistralai/Mistral-7B-Instruct-v0.3`)
- `PORT`: Server port (optional, defaults to 3000)
- `GITHUB_PAGES_DOMAIN`: (optional) Origin allowed by CORS
- `TRUST_PROXY`: (optional) Express proxy trust setting; use `1` on Render
5. Run the server:
```bash
npm run dev:local
```
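Put together, the variables above might look like this in `.env` (all values below are placeholders; substitute your own):

```bash
# backend/.env — placeholder values, do not commit real keys
HF_API_KEY=hf_xxxxxxxxxxxxxxxx
API_KEY=choose-a-long-random-string
HF_MODEL_TYPE=mistralai/Mistral-7B-Instruct-v0.3
PORT=3000
GITHUB_PAGES_DOMAIN=https://your-username.github.io
TRUST_PROXY=1
```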
See [backend/README.md](./backend/README.md) for detailed backend documentation.
### Frontend Setup
1. Navigate to the ui directory:
```bash
cd ui
```
2. Update `app.js` with your API key (the base URL is auto-detected):
```javascript
const API_KEY = 'your-api-key-here'; // Must match your backend API_KEY
```
**Auto detection:** `API_BASE_URL` automatically routes to `http://localhost:3000` when the UI is opened on `localhost`/`127.0.0.1`, and otherwise falls back to the production Render URL defined in `app.js`. Update that production URL if you deploy elsewhere.
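A minimal sketch of how this detection can work (the function name and production URL here are illustrative, not the exact code in `ui/app.js`):

```javascript
// Sketch of frontend base-URL auto-detection; names and URL are placeholders.
const PROD_BASE_URL = 'https://your-backend.onrender.com';

function detectApiBaseUrl(hostname) {
  // Local development: talk to the backend running on port 3000.
  if (hostname === 'localhost' || hostname === '127.0.0.1') {
    return 'http://localhost:3000';
  }
  // Anything else (e.g. GitHub Pages) uses the deployed backend.
  return PROD_BASE_URL;
}

// In the browser this would be called as:
// const API_BASE_URL = detectApiBaseUrl(window.location.hostname);
```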
3. Deploy to GitHub Pages:
- Push the `ui/` folder to your GitHub repository
- Go to Settings > Pages
- Set source to the branch containing `ui/`
- Set root directory to `/ui`
See [ui/README.md](./ui/README.md) for detailed frontend documentation.
## Features
- **Backend**:
- Express server with RESTful API
- Security: CORS, API key validation, rate limiting
- LLM Integration: Hugging Face Inference API via `@huggingface/inference` SDK
- MCP Integration: [MCP Capitol Trades](https://www.npmjs.com/package/@anguslin/mcp-capitol-trades) data access via Model Context Protocol
- Two-stage LLM workflow: MCP tool selection followed by data analysis
- Conversation history management per user session
- **Frontend**:
- Modern, responsive chat interface
- Real-time communication with backend API
- Displays LLM responses and MCP data with markdown formatting
- Error handling and loading states with animated progress bar (up to 45 seconds)
- "How it works" button with architecture modal explaining the system
- Auto-detection of local vs production backend URLs
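The two-stage LLM workflow listed above could be sketched roughly as follows. All function names here are hypothetical (the real flow lives in `backend/services/` and is asynchronous); this is just the shape of the pipeline:

```javascript
// Hypothetical sketch of the two-stage chat workflow; not the actual service code.
function handleChat(message, history, llm, mcp) {
  // Stage 1: the LLM picks an MCP tool (or none) for this question.
  const toolChoice = llm.selectTool(message, mcp.listTools());

  // Fetch trading data through MCP only when a tool was selected.
  const mcpData = toolChoice ? mcp.callTool(toolChoice.name, toolChoice.args) : null;

  // Stage 2: the LLM summarizes the data within the conversation context.
  const reply = llm.summarize(message, history, mcpData);
  return { reply, mcpDataUsed: mcpData !== null };
}
```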
## Running Locally (Full Stack)
To run both the backend and frontend locally for development:
### Step 1: Start the Backend
Open a terminal and run:
```bash
cd backend
npm install # If you haven't already
npm run dev:local # Starts backend on http://localhost:3000
```
The backend will be available at `http://localhost:3000`. Keep this terminal running.
### Step 2: Start the Frontend
Open a **new terminal** and run:
```bash
cd ui
# Choose one of these methods:
# Option 1: Python (if installed)
python -m http.server 8080
# Option 2: Node.js http-server
npx http-server -p 8080
# Option 3: PHP (if installed)
php -S localhost:8080
```
### Step 3: Configure the Frontend
The UI automatically detects local development and will use `http://localhost:3000` as the backend URL when running on `localhost`.
**Important:** Make sure to update the `API_KEY` in `ui/app.js` to match your backend's `API_KEY` from `backend/.env`:
```javascript
const API_KEY = 'your-api-key-here'; // Must match backend/.env API_KEY
```
### Step 4: Open in Browser
Open your browser and navigate to:
- **Frontend UI**: http://localhost:8080
- **Backend API**: http://localhost:3000/health (to verify it's running)
### How It Works
The `ui/app.js` automatically detects if you're running locally:
- **Local development** (localhost): Uses `http://localhost:3000`
- **Production** (deployed): Uses your deployed backend URL
You can verify which URL is being used by checking the browser console.
## Development
### Backend Development
```bash
cd backend
npm run dev:local # Development with auto-reload
npm test # Run tests
npm run test:llm # Test LLM service
```
### Frontend Development
The frontend automatically connects to the local backend when running on `localhost`. Just make sure:
1. Your backend is running on port 3000 (or update the local URL inside `ui/app.js`)
2. Your `API_KEY` in `ui/app.js` matches your backend's `API_KEY`
## Deployment
### Backend (Render)
1. Create a new Web Service on Render
2. Connect your GitHub repository
3. Set root directory to `backend`
4. Configure environment variables (see backend README)
5. Deploy
### Frontend (GitHub Pages)
1. Push `ui/` folder to GitHub
2. Configure GitHub Pages to serve from `ui/` directory
3. Ensure the production backend URL inside `ui/app.js` matches your deployment
## API Documentation
### POST `/api/chat`
Processes user messages and returns combined LLM and MCP responses.
**Headers:**
- `x-api-key`: Your API key (required)
- `x-user-id`: User identifier for conversation history (required)
**Request Body:**
```json
{
"message": "What are the top traded assets by politicians?"
}
```
**Response:**
```json
{
"reply": "Summary of the MCP data with markdown formatting...",
"mcpDataUsed": true
}
```
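As one way to call the endpoint from client code, the request could be assembled like this (the key and user ID values are placeholders; the header names match the spec above):

```javascript
// Builds fetch options for POST /api/chat; header names follow the API spec.
function buildChatRequest(apiKey, userId, message) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': apiKey,  // must match the backend's API_KEY
      'x-user-id': userId,  // keys the per-user conversation history
    },
    body: JSON.stringify({ message }),
  };
}

// Usage in the browser (or Node 18+ with global fetch):
// const res = await fetch(`${API_BASE_URL}/api/chat`,
//   buildChatRequest(API_KEY, userId, 'What are the top traded assets?'));
```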
### GET `/health`
Health check endpoint.
**Response:**
```json
{
"status": "ok"
}
```
## Security
- CORS is restricted to configured GitHub Pages domains
- API key validation via `x-api-key` header
- Rate limiting: 100 requests per 15 minutes per IP address
- Conversation history is stored locally per user (via `x-user-id` header)
- User IDs are persisted in browser localStorage for session continuity
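A sketch of how the user ID persistence described above might work. The storage object is passed in so the function can run outside a browser; the real frontend would pass `window.localStorage`, and the ID scheme shown is illustrative:

```javascript
// Returns a stable user ID, creating and persisting one on first visit.
// `storage` is any localStorage-like object with getItem/setItem.
function getOrCreateUserId(storage) {
  let id = storage.getItem('userId');
  if (!id) {
    // Illustrative ID scheme; the real app may generate IDs differently.
    id = 'user-' + Math.random().toString(36).slice(2, 10);
    storage.setItem('userId', id);
  }
  return id;
}

// In the browser: const userId = getOrCreateUserId(window.localStorage);
```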
## Technology Stack
- **Frontend**: Vanilla JavaScript, HTML5, CSS3 (deployed on GitHub Pages)
- **Backend**: Node.js, Express.js
- **LLM**: Hugging Face Inference API
- **MCP**: [@anguslin/mcp-capitol-trades](https://www.npmjs.com/package/@anguslin/mcp-capitol-trades) - Model Context Protocol server for politician trading data
## License
MIT