⏱️ Quick Start
How to Install ⏱️
- Admin Creation: The first account created on Open WebUI gains Administrator privileges, controlling user management and system settings.
- User Registrations: Subsequent sign-ups start with Pending status, requiring Administrator approval for access.
- Privacy and Data Security: All your data, including login details, is stored locally on your device. Open WebUI makes no external requests, ensuring strict confidentiality for enhanced privacy and security.
Choose your preferred installation method below:
- Docker: Recommended for most users due to ease of setup and flexibility.
- Kubernetes: Ideal for enterprise deployments that require scaling and orchestration.
- Python: Suitable for low-resource environments or those wanting a manual setup.
- Docker
- Kubernetes
- Python
- Third Party
- Docker Compose
- Podman
- Manual Docker
- Docker Swarm
Docker Compose Setup
Using Docker Compose simplifies the management of multi-container Docker applications.
If you don't have Docker installed, check out our Docker installation tutorial.
Docker Compose requires an additional package, `docker-compose-v2`.
Warning: Older Docker Compose tutorials may reference version 1 syntax, which uses commands like `docker-compose build`. Ensure you use version 2 syntax, which uses commands like `docker compose build` (note the space instead of a hyphen).
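If you are unsure which generation is installed, a quick check like the following can tell you; this is a sketch that assumes only that `docker` may or may not be on your PATH:

```shell
# Detect whether Compose v2 (the "docker compose" plugin) is available.
if docker compose version >/dev/null 2>&1; then
  echo "Compose v2 detected: use 'docker compose ...'"
elif command -v docker-compose >/dev/null 2>&1; then
  echo "Legacy Compose v1 detected: install the docker-compose-v2 package"
else
  echo "Docker Compose not found"
fi
```

The script prints exactly one of the three messages, so it is safe to run on any machine.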
Example docker-compose.yml
Here is an example configuration file for setting up Open WebUI with Docker Compose:
```yaml
version: '3'
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
Starting the Services
To start your services, run the following command:
```bash
docker compose up -d
```
Helper Script
A useful helper script called `run-compose.sh` is included with the codebase. This script assists in choosing which Docker Compose files to include in your deployment, streamlining the setup process.
Note: For Nvidia GPU support, add the following to your service definition in the `docker-compose.yml` file:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```
This setup ensures that your application can leverage GPU resources when available.
Data Storage and Bind Mounts
This project uses Docker named volumes to persist data. If needed, replace the volume name with a host directory:
Example:
```bash
-v /path/to/folder:/app/backend/data
```
Ensure the host folder has the correct permissions.
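For instance, the example compose file could use a bind mount instead of the named volume; this is a sketch, with `/path/to/folder` standing in for your own host directory:

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      # Bind mount: host directory on the left, container path on the right
      - /path/to/folder:/app/backend/data
```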
Using Podman
Podman is a daemonless container engine for developing, managing, and running OCI Containers.
Basic Commands
- Run a Container:

  ```bash
  podman run -d --name openwebui -p 3000:8080 ghcr.io/open-webui/open-webui:main
  ```

- List Running Containers:

  ```bash
  podman ps
  ```
Networking with Podman
If networking issues arise, you may need to adjust your network settings:
```bash
--network=slirp4netns:allow_host_loopback=true
```
Refer to the Podman documentation for advanced configurations.
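Putting the two together, a run command with the loopback workaround might look like this; it is a sketch, and the flag is only needed when rootless networking blocks access to services on the host:

```shell
podman run -d --name openwebui \
  --network=slirp4netns:allow_host_loopback=true \
  -p 3000:8080 \
  ghcr.io/open-webui/open-webui:main
```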
Manual Docker Setup
If you prefer to set up Docker manually, follow these steps.
Step 1: Pull the Open WebUI Image
```bash
docker pull ghcr.io/open-webui/open-webui:main
```
Step 2: Run the Container
```bash
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
```
Note: For Nvidia GPU support, add `--gpus all` to the `docker run` command.
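For example, the full run command with GPU support enabled might look like this; it assumes the Nvidia Container Toolkit is installed on the host:

```shell
docker run -d -p 3000:8080 --gpus all \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```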
Access the WebUI
After the container is running, access Open WebUI at http://localhost:3000.
Docker Swarm
This installation method requires knowledge of Docker Swarm, as it utilizes a stack file to deploy three separate containers as services in a Docker Swarm.
It includes isolated containers of ChromaDB, Ollama, and Open WebUI. Additionally, there are pre-filled environment variables to further illustrate the setup.
Choose the appropriate command based on your hardware setup:
- Before Starting:
  Directories for your volumes need to be created on the host, or you can specify a custom location or volume.
  The current example utilizes an isolated directory `data`, which is within the same directory as `docker-stack.yaml`.
  For example:

  ```bash
  mkdir -p data/open-webui data/chromadb data/ollama
  ```
- With GPU Support:
  `docker-stack.yaml`:
```yaml
version: '3.9'

services:
  openWebUI:
    image: ghcr.io/open-webui/open-webui:main
    depends_on:
      - chromadb
      - ollama
    volumes:
      - ./data/open-webui:/app/backend/data
    environment:
      DATA_DIR: /app/backend/data
      OLLAMA_BASE_URLS: http://ollama:11434
      CHROMA_HTTP_PORT: 8000
      CHROMA_HTTP_HOST: chromadb
      CHROMA_TENANT: default_tenant
      VECTOR_DB: chroma
      WEBUI_NAME: Awesome ChatBot
      CORS_ALLOW_ORIGIN: "*" # This is the current default; change it before going live
      RAG_EMBEDDING_ENGINE: ollama
      RAG_EMBEDDING_MODEL: nomic-embed-text-v1.5
      RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE: "True"
    ports:
      - target: 8080
        published: 8080
        mode: overlay
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3

  chromadb:
    hostname: chromadb
    image: chromadb/chroma:0.5.15
    volumes:
      - ./data/chromadb:/chroma/chroma
    environment:
      - IS_PERSISTENT=TRUE
      - ALLOW_RESET=TRUE
      - PERSIST_DIRECTORY=/chroma/chroma
    ports:
      - target: 8000
        published: 8000
        mode: overlay
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
    healthcheck:
      test: ["CMD-SHELL", "curl localhost:8000/api/v1/heartbeat || exit 1"]
      interval: 10s
      retries: 2
      start_period: 5s
      timeout: 10s

  ollama:
    image: ollama/ollama:latest
    hostname: ollama
    ports:
      - target: 11434
        published: 11434
        mode: overlay
    deploy:
      resources:
        reservations:
          generic_resources:
            - discrete_resource_spec:
                kind: "NVIDIA-GPU"
                value: 0
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
    volumes:
      - ./data/ollama:/root/.ollama
```
- Additional Requirements:
  - Ensure CUDA is enabled; follow your OS and GPU instructions for that.
  - Enable Docker GPU support; see the Nvidia Container Toolkit.
  - Follow the guide on configuring Docker Swarm to work with your GPU.
  - Ensure the GPU resource is enabled in `/etc/nvidia-container-runtime/config.toml` and enable GPU resource advertising by uncommenting the `swarm-resource = "DOCKER_RESOURCE_GPU"` line. The Docker daemon must be restarted after updating these files on each node.
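For reference, the relevant line in `/etc/nvidia-container-runtime/config.toml` looks like this once uncommented:

```toml
swarm-resource = "DOCKER_RESOURCE_GPU"
```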
- With CPU Support:
  Modify the Ollama service within `docker-stack.yaml` and remove the lines for `generic_resources:`
  ```yaml
  ollama:
    image: ollama/ollama:latest
    hostname: ollama
    ports:
      - target: 11434
        published: 11434
        mode: overlay
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
    volumes:
      - ./data/ollama:/root/.ollama
  ```
- Deploy Docker Stack:

  ```bash
  docker stack deploy -c docker-stack.yaml -d super-awesome-ai
  ```
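After deploying, you can check that all three services came up; `super-awesome-ai` is the stack name used in the deploy command above, and Swarm prefixes service names with it:

```shell
# List each service in the stack with its replica count
docker stack services super-awesome-ai

# Follow the logs of the Open WebUI service
docker service logs super-awesome-ai_openWebUI --follow
```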
- Helm
- Kustomize
Helm Setup for Kubernetes
Helm helps you manage Kubernetes applications.
Prerequisites
- Kubernetes cluster is set up.
- Helm is installed.
Steps
1. Add Open WebUI Helm Repository:

   ```bash
   helm repo add open-webui https://open-webui.github.io/helm-charts
   helm repo update
   ```

2. Install Open WebUI Chart:

   ```bash
   helm install openwebui open-webui/open-webui
   ```

3. Verify Installation:

   ```bash
   kubectl get pods
   ```
Access the WebUI
Set up port forwarding or load balancing to access Open WebUI from outside the cluster.
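For a quick local test, port forwarding with kubectl is the simplest option. This is a sketch: the service name and port depend on your chart release, so check them with `kubectl get svc` first:

```shell
# Forward local port 3000 to the Open WebUI service
kubectl port-forward svc/open-webui 3000:8080
```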
Kustomize Setup for Kubernetes
Kustomize allows you to customize Kubernetes YAML configurations.
Prerequisites
- Kubernetes cluster is set up.
- Kustomize is installed.
Steps
1. Clone the Open WebUI Manifests:

   ```bash
   git clone https://github.com/open-webui/k8s-manifests.git
   cd k8s-manifests
   ```

2. Apply the Manifests:

   ```bash
   kubectl apply -k .
   ```

3. Verify Installation:

   ```bash
   kubectl get pods
   ```
Access the WebUI
Set up port forwarding or load balancing to access Open WebUI from outside the cluster.
- Venv
- Conda
- Development
Using Virtual Environments
Create isolated Python environments using `venv`.
Steps
1. Create a Virtual Environment:

   ```bash
   python3 -m venv venv
   ```

2. Activate the Virtual Environment:

   - On Linux/macOS:

     ```bash
     source venv/bin/activate
     ```

   - On Windows:

     ```bash
     venv\Scripts\activate
     ```

3. Install Open WebUI:

   ```bash
   pip install open-webui
   ```

4. Start the Server:

   ```bash
   open-webui serve
   ```
Choose Your Platform
- Linux/macOS
- Windows
Install with Conda
1. Create a Conda Environment:

   ```bash
   conda create -n open-webui python=3.11
   ```

2. Activate the Environment:

   ```bash
   conda activate open-webui
   ```

3. Install Open WebUI:

   ```bash
   pip install open-webui
   ```

4. Start the Server:

   ```bash
   open-webui serve
   ```
Development Setup
For developers who want to contribute, check the Development Guide in Advanced Topics.
Next Steps
After installing, visit:
- http://localhost:3000 to access Open WebUI.
- or http://localhost:8080/ when using a Python deployment.
You are now ready to start Using Open WebUI!
Join the Community
Need help? Have questions? Join our community:
Stay updated with the latest features, troubleshooting tips, and announcements!
Conclusion
Thank you for choosing Open WebUI! We are committed to providing a powerful, privacy-focused interface for your LLM needs. If you encounter any issues, refer to the Troubleshooting Guide.
Happy exploring! 🎉