# AI Recipe Recommendation System
An AI recipe recommendation system built on a Flask microservices architecture, integrating RAG, large language models, and image generation models. It recommends recipes from user input, generates recipe content, and produces stylized recipe images.
## System Architecture
The system consists of the following microservices:
- api-gateway: API gateway, unified entry point
- user-service: User service, handling user-related functionalities
- core-service: Core service, handling recommendation logic
- ai-service: AI service, handling text and image generation
- retrieval-service: Retrieval service, handling recipe searches
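To illustrate how the gateway ties the services together, here is a minimal sketch of prefix-based request routing. The path prefixes and internal hostnames below are assumptions for illustration, not taken from this codebase:

```python
# Hypothetical routing table an API gateway like this might use; the
# prefixes and internal hostnames are assumptions, not from this repo.
SERVICE_ROUTES = {
    "/register": "http://user-service:8001",
    "/login": "http://user-service:8001",
    "/recommend": "http://core-service:8002",
    "/generate": "http://ai-service:8003",
    "/search": "http://retrieval-service:8004",
}

def resolve_upstream(path):
    """Return the upstream base URL whose prefix matches the request path."""
    for prefix, base in SERVICE_ROUTES.items():
        if path.startswith(prefix):
            return base
    return None

print(resolve_upstream("/recommend"))  # -> http://core-service:8002
```

In a real gateway, the matched upstream would then receive the proxied request (with authentication handled before forwarding).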
## Tech Stack
- Web Framework: Flask
- Database: PostgreSQL
- Cache: Redis
- Service Registration and Discovery: Nacos
- Container Management: Docker (docker-compose)
- AI Models:
  - Text Generation: deepseek-llm-7b-instruct
  - Image Generation: stable-diffusion-v1-5
  - Text Embedding: all-MiniLM-L6-v2
## Deployment Requirements
- Docker and Docker Compose
- NVIDIA GPU (16GB VRAM or above)
- At least 16GB of RAM
- At least 100GB of storage space
## Quick Start
1. Clone the repository:
```bash
git clone [repository-url]
cd small_recipe_recommendation
```
2. Start the services:
```bash
docker-compose up -d
```
3. Access the services:
- API Gateway: http://localhost:8000
- User Service: http://localhost:8001
- Core Service: http://localhost:8002
- AI Service: http://localhost:8003
- Retrieval Service: http://localhost:8004
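A quick way to verify the stack came up is to probe each port. The `/health` endpoint below is an assumption — check each service's `app.py` for its actual routes:

```python
import urllib.request

# Port map from the Quick Start list above; the /health route is an
# assumption -- substitute whatever each service actually exposes.
SERVICES = {
    "api-gateway": 8000,
    "user-service": 8001,
    "core-service": 8002,
    "ai-service": 8003,
    "retrieval-service": 8004,
}

def health_url(name):
    return f"http://localhost:{SERVICES[name]}/health"

def check_all(timeout=2):
    """Probe every service and return {service_name: reachable?}."""
    results = {}
    for name in SERVICES:
        try:
            with urllib.request.urlopen(health_url(name), timeout=timeout) as r:
                results[name] = r.status == 200
        except OSError:
            results[name] = False
    return results
```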
## API Documentation
### User Service
- POST /register - User registration
- POST /login - User login
- GET /preferences - Get user preferences
- PUT /preferences - Update user preferences
- POST /feedback - Submit feedback
- GET /history - Get history records
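A typical client flow is register, log in, then call the authenticated endpoints with the returned token. The field names and the Bearer-token scheme below are assumptions about user-service's API — verify them against `user-service/app.py`:

```python
import json

# Assumed request schema and auth scheme; confirm against user-service/app.py.

def register_payload(username, password):
    return {"username": username, "password": password}

def auth_header(token):
    """Header to attach to /preferences, /feedback, and /history requests."""
    return {"Authorization": f"Bearer {token}"}

body = json.dumps(register_payload("alice", "s3cret"))
print(body)
print(auth_header("abc123"))
```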
### Core Service
- POST /recommend - Get recipe recommendations
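A recommendation call through the gateway might look like the sketch below. The `ingredients` and `cuisine` fields are assumed field names, not confirmed against core-service:

```python
import json
import urllib.request

GATEWAY = "http://localhost:8000"  # per the Quick Start section

def recommend_request(ingredients, cuisine=None):
    """Build a POST /recommend request; the body field names are assumptions."""
    body = {"ingredients": ingredients}
    if cuisine is not None:
        body["cuisine"] = cuisine
    return urllib.request.Request(
        f"{GATEWAY}/recommend",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = recommend_request(["tomato", "egg"], cuisine="chinese")
print(req.full_url, req.get_method())  # http://localhost:8000/recommend POST
```

Send it with `urllib.request.urlopen(req)` once the stack is running.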
### AI Service
- POST /generate/description - Generate recipe description
- POST /generate/image - Generate recipe image
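Image-generation endpoints commonly return the image base64-encoded inside JSON; assuming this service follows that pattern (the `"image"` field name is an assumption — check `ai-service/app.py`), a client could save the result like this:

```python
import base64
import json
import os
import tempfile

# Assumption: /generate/image responds with JSON like {"image": "<base64 PNG>"};
# the actual response shape may differ.

def save_image(response_text, path):
    """Decode the base64 image field and write the raw bytes to disk."""
    data = json.loads(response_text)
    png = base64.b64decode(data["image"])
    with open(path, "wb") as f:
        f.write(png)
    return len(png)

# Round-trip demo with fake bytes standing in for a real PNG:
fake = base64.b64encode(b"\x89PNG...").decode()
out_path = os.path.join(tempfile.gettempdir(), "recipe.png")
n = save_image(json.dumps({"image": fake}), out_path)
print(n, "bytes written")
```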
### Retrieval Service
- POST /search - Search for recipes
- POST /index - Index new recipes
## Development Guide
1. Environment Setup:
```bash
python -m venv venv
source venv/bin/activate # Linux/Mac
# or
.\venv\Scripts\activate # Windows
pip install -r requirements.txt
```
2. Local Development:
```bash
# Start individual services
python api-gateway/app.py
python user-service/app.py
python core-service/app.py
python ai-service/app.py
python retrieval-service/app.py
```
3. Testing:
```bash
# Run tests
pytest
```
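Tests live in files named `test_*.py` so pytest can collect them. Here is an example unit test in that style; the helper under test is hypothetical — substitute a real function from one of the services:

```python
# Hypothetical helper and its test, in the shape pytest collects
# (functions prefixed with test_ in files named test_*.py).

def normalize_ingredients(raw):
    """Lowercase, strip, and de-duplicate an ingredient list, keeping order."""
    seen = []
    for item in raw:
        cleaned = item.strip().lower()
        if cleaned and cleaned not in seen:
            seen.append(cleaned)
    return seen

def test_normalize_ingredients():
    assert normalize_ingredients([" Tomato", "egg", "tomato "]) == ["tomato", "egg"]
```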
## Notes
1. On first startup, the AI service downloads its models, which can take a long time.
2. Ensure that the NVIDIA drivers and CUDA are installed correctly.
3. When deploying in a production environment, please change the default passwords and keys.
4. It is recommended to use HTTPS for production environment deployment.
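For note 3, one common approach is to read secrets from environment variables instead of shipping the compose-file defaults to production. The variable names below are illustrative, not taken from this repo's configuration:

```python
import os

# Sketch of failing fast when a required secret is missing; the variable
# name POSTGRES_PASSWORD is an assumption about this stack's configuration.

def require_env(name, default=None):
    """Return an environment variable, or raise if it is unset with no default."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

os.environ.setdefault("POSTGRES_PASSWORD", "change-me")  # demo only
print(require_env("POSTGRES_PASSWORD"))
```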
## Contribution Guidelines
1. Fork the project
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request
## License
MIT License