lemonade-sdk/lemonade
Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk
Topics: amd, llama, llm, llm-inference, llms, local-server, mistral, npu, onnxruntime, qwen, openai-api, mcp, mcp-server, gpu, radeon, ryzen, vulkan, ai, genai
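
As a quick illustration of the OpenAI-compatible local server suggested by the topics above, the sketch below queries a running Lemonade server with the official `openai` Python client. The base URL, port, and model name are assumptions for illustration only; check the Lemonade documentation for the actual endpoint and the models available on your machine.

```python
# Minimal sketch: chat with a model served by a local Lemonade server.
# Assumptions (not taken from this page): the server listens on
# http://localhost:8000/api/v1 and a model named "Llama-3.2-1B-Instruct"
# is already available. Adjust both values to match your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/api/v1",  # assumed local Lemonade endpoint
    api_key="lemonade",  # local servers typically ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct",  # assumed model name
    messages=[{"role": "user", "content": "Say hello from my NPU/GPU."}],
)

print(response.choices[0].message.content)
```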

Content