// DEPLOYMENT
LingLang is designed to run on bare metal, a VPS, or your local machine. We provide official Docker images and Helm charts.
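A minimal sketch of both deployment paths. The image name, chart repository URL, port, and data path below are illustrative assumptions, not the published names; substitute the values from the official registry and chart repo.

```shell
# Run the Docker image directly (image name and port are assumptions).
docker run -d \
  --name linglang \
  -p 8080:8080 \
  -v "$PWD/data:/var/lib/linglang" \
  linglang/linglang:latest

# Or deploy to Kubernetes via the Helm chart (repo URL is an assumption).
helm repo add linglang https://charts.linglang.example
helm repo update
helm install linglang linglang/linglang \
  --namespace linglang --create-namespace
```

The volume mount keeps state outside the container so upgrades are a matter of pulling a new image and re-running.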
LingLang supports Ollama and LocalAI out of the box. Point the configuration at your local inference server for a fully offline setup.
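A configuration sketch for the offline setup. The `LINGLANG_*` variable names are hypothetical placeholders for however LingLang reads its settings; the endpoint defaults are the real ones (Ollama listens on port 11434 by default, LocalAI on 8080).

```shell
# Hypothetical LingLang settings; only the endpoints and ports are real defaults.
export LINGLANG_PROVIDER=ollama
export LINGLANG_BASE_URL=http://localhost:11434   # Ollama's default endpoint
# export LINGLANG_BASE_URL=http://localhost:8080  # LocalAI's default endpoint
export LINGLANG_MODEL=llama3
```

With the base URL pointed at a local server, no request ever leaves the machine.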
The self-hosted version is under active development; expect breaking changes to the API schema.