// DEPLOYMENT

Own your infrastructure.

LingLang is designed to run on bare metal, a VPS, or your local machine. We provide official Docker images and Helm charts.

Prerequisites

  • Docker Engine 20.10+
  • 2 GB RAM minimum (4 GB recommended)
  • OpenAI API Key (or local LLM endpoint)
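
If you are using OpenAI rather than a local endpoint, the key must be available to the engine at runtime. One common pattern is to export it in your shell before starting the container; note that the variable name below follows OpenAI's own convention and is an assumption about what LingLang reads — check the configuration reference for the exact name.

```shell
# Placeholder value — substitute your real key. OPENAI_API_KEY is
# OpenAI's conventional variable name, assumed here to be the one
# LingLang reads.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm it is set without echoing the secret itself.
[ -n "$OPENAI_API_KEY" ] && echo "key is set"
```

You can then pass it into the container with `-e OPENAI_API_KEY` on the `docker run` command line.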

Local LLM Support

LingLang supports Ollama and LocalAI out of the box. Point the configuration to your local inference server for a 100% offline experience.
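
For example, with a default Ollama install listening on port 11434, a starter configuration might be written like this. The key names (`llm.provider`, `llm.endpoint`, `llm.model`) are illustrative assumptions, not LingLang's documented schema — adjust them to match the configuration reference:

```shell
# Write a starter settings.yaml for a local Ollama backend.
# ASSUMPTION: the keys below are illustrative, not LingLang's
# documented schema. 11434 is Ollama's default port; llama3 is
# an example model name.
mkdir -p config
cat > config/settings.yaml <<'EOF'
llm:
  provider: ollama
  endpoint: http://localhost:11434
  model: llama3
EOF
```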

# 1. Pull the image
> docker pull linglang/engine:latest

# 2. Create configuration
> mkdir config && touch config/settings.yaml

# 3. Run the container
> docker run -d \
    --name linglang \
    -p 3000:3000 \
    -v "$(pwd)/config:/app/config" \
    linglang/engine:latest

# Expected output
Container started: 8f2a1c9d

Warning: Alpha Release

The self-hosted version is currently in active development. Breaking changes may occur in the API schema.
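
Because the image is in alpha, pinning a specific tag instead of `:latest` avoids silently picking up a breaking release on your next pull. A minimal Compose file mirroring the `docker run` command above might look like this — the tag `0.4.2` is a placeholder, so pick a real one from the registry:

```yaml
# docker-compose.yml — equivalent to the docker run command above,
# but pinned to a specific tag ("0.4.2" is a placeholder).
services:
  linglang:
    image: linglang/engine:0.4.2
    container_name: linglang
    ports:
      - "3000:3000"
    volumes:
      - ./config:/app/config
```

Start it with `docker compose up -d`; upgrades then become an explicit tag change in this file rather than an implicit side effect of pulling.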