🚀 Getting Started
How to Install 🚀
Before You Begin
Installing Docker
For Windows and Mac Users:
- Download Docker Desktop from Docker's official website.
- Follow the installation instructions provided on the website. After installation, open Docker Desktop to ensure it's running properly.
For Ubuntu Users:
- Open your terminal.
- Set up Docker's apt repository:
  - Update your package index:
    sudo apt-get update
  - Install packages to allow apt to use a repository over HTTPS:
    sudo apt-get install ca-certificates curl
  - Create a directory for the Docker apt keyring:
    sudo install -m 0755 -d /etc/apt/keyrings
  - Add Docker's official GPG key:
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
  - Add the Docker repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    Note: If you're using an Ubuntu derivative distro, such as Linux Mint, you might need to use UBUNTU_CODENAME instead of VERSION_CODENAME.
- Install Docker Engine:
  - Update your package index again:
    sudo apt-get update
  - Install Docker Engine, CLI, and containerd:
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
- Verify the Docker installation:
  - Use the following command to run a test image:
    sudo docker run hello-world
    This command downloads a test image and runs it in a container. If successful, it prints an informational message confirming that Docker is installed and working correctly.
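As an optional convenience (not part of the official steps above), you can add your user to the docker group so that docker commands no longer require sudo:

```bash
# Optional: allow running Docker without sudo.
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker`) for the group change to apply,
# then re-test without sudo:
docker run hello-world
```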
Other Linux Distributions:
- For other Linux distributions, please refer to the official Docker documentation for installation instructions specific to your distro.
Ensure You Have the Latest Version of Ollama:
- Download the latest version from https://ollama.com/.
Verify Ollama Installation:
- After installing Ollama, verify its functionality by accessing http://127.0.0.1:11434/ in your web browser. Note that the port number might be different based on your installation.
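If you prefer the terminal over a browser, a quick check looks like this (assuming the default port 11434); a healthy Ollama install replies with "Ollama is running":

```bash
# Ollama answers a plain GET on its root endpoint when it's up.
curl http://127.0.0.1:11434/
```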
- Admin Creation: The very first account to sign up on Open WebUI will be granted Administrator privileges. This account will have comprehensive control over the platform, including user management and system settings.
- User Registrations: All subsequent users signing up will initially have their accounts set to Pending status by default. These accounts will require approval from the Administrator to gain access to the platform functionalities.
- Privacy and Data Security: We prioritize your privacy and data security above all. Please be reassured that all data entered into Open WebUI is stored locally on your device. Our system is designed to be privacy-first, ensuring that no external requests are made, and your data does not leave your local environment. We are committed to maintaining the highest standards of data privacy and security, ensuring that your information remains confidential and under your control.
Quick Start with Docker 🐳
When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data volume mapping in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
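Docker creates this named volume automatically the first time it's referenced; once the container has been created, you can confirm the volume exists and see where its data lives on the host:

```bash
# Show the mount point of the open-webui volume on the host.
docker volume inspect open-webui
```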
Installation with Default Configuration
- If Ollama is on your computer, use this command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- If Ollama is on a different server, use this command:
To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL (see the connectivity check after this list):
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- To run Open WebUI with Nvidia GPU support, use this command:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
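As referenced above, before pointing OLLAMA_BASE_URL at a remote server it can help to confirm the endpoint is reachable. A minimal check, with https://example.com standing in for your server's URL:

```bash
# A reachable Ollama server answers /api/tags with a JSON list of its models.
curl https://example.com/api/tags
```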
Installation for OpenAI API Usage Only
- If you're only using the OpenAI API, use this command:
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Installing Open WebUI with Bundled Ollama Support
This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
- With GPU Support: Utilize GPU resources by running the following command:
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
- For CPU Only: If you're not using a GPU, use this command instead:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Both commands provide a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
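If the UI doesn't load, a quick way to check the container's state and startup logs (using the container name open-webui from the commands above):

```bash
# Confirm the container is running, then follow its logs.
docker ps --filter name=open-webui
docker logs -f open-webui
```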
Open WebUI: Server Connection Error
Encountering connection issues between the Open WebUI Docker container and the Ollama server? This problem often arises because distro-packaged versions of Docker, like those from the Ubuntu repository, do not support the host.docker.internal alias for reaching the host directly. Inside a container, referring to localhost or 127.0.0.1 typically points back to the container itself, not the host machine.
To address this, we recommend using the --network=host flag in your Docker command. This flag allows the container to use the host's networking stack, effectively making localhost or 127.0.0.1 in the container refer to the host machine. As a result, the WebUI can successfully connect to the Ollama server at 127.0.0.1:11434. Please note that with --network=host, the container's port configuration aligns directly with the host, changing the access link to http://localhost:8080.
Here's how you can modify your Docker command:
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
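A quick sanity check after starting the container this way; note that with host networking the UI moves to port 8080:

```bash
# The UI should respond on the host's port 8080...
curl -I http://localhost:8080
# ...and the Ollama server should answer on the host loopback.
curl http://127.0.0.1:11434/
```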
For more details on networking in Docker and addressing common connectivity issues, visit our FAQ page. This page provides additional context and solutions for frequently encountered problems, ensuring a smoother operation of Open WebUI in various environments.
Docker Compose
Using Docker Compose
- If you don't have Ollama yet, use Docker Compose for easy installation. Run this command (a quick status check follows this list):
docker compose up -d --build
- For Nvidia GPU Support: Use an additional Docker Compose file:
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
- For AMD GPU Support: Some AMD GPUs require setting an environment variable for proper functionality:
HSA_OVERRIDE_GFX_VERSION=11.0.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
AMD GPU Support with HSA_OVERRIDE_GFX_VERSION
For AMD GPU users encountering compatibility issues, setting the HSA_OVERRIDE_GFX_VERSION environment variable is crucial. This variable instructs the ROCm platform to emulate a specific GPU architecture, ensuring compatibility with various AMD GPUs that are not officially supported. Depending on your GPU model, adjust HSA_OVERRIDE_GFX_VERSION as follows:
- For RDNA1 & RDNA2 GPUs (e.g., RX 6700, RX 680M): Use HSA_OVERRIDE_GFX_VERSION=10.3.0.
- For RDNA3 GPUs: Set HSA_OVERRIDE_GFX_VERSION=11.0.0.
- For older GCN (Graphics Core Next) GPUs: The version to use varies. GCN 4th gen and earlier might require different settings, such as ROC_ENABLE_PRE_VEGA=1 for GCN4, or HSA_OVERRIDE_GFX_VERSION=9.0.0 for Vega (GCN5.0) emulation.
Ensure you replace <version> with the appropriate version number based on your GPU model and the guidelines above. For a detailed list of compatible versions and more in-depth instructions, refer to the ROCm documentation and the openSUSE Wiki on AMD GPGPU.
Example command for RDNA1 & RDNA2 GPUs:
HSA_OVERRIDE_GFX_VERSION=10.3.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
- To Expose Ollama API: Use another Docker Compose file:
docker compose -f docker-compose.yaml -f docker-compose.api.yaml up -d --build
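After bringing the stack up with any of the variants above, you can verify it with standard Compose commands (assuming the service is named open-webui in the repository's compose file):

```bash
# Show service state, then follow logs while testing.
docker compose ps
docker compose logs -f open-webui
```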
Using the run-compose.sh Script (Linux or Docker-Enabled WSL2 on Windows)
- Give execute permission to the script:
chmod +x run-compose.sh
- For a CPU-only container:
./run-compose.sh
- For GPU support (read the note about GPU compatibility):
./run-compose.sh --enable-gpu
- To build the latest local version, add --build:
./run-compose.sh --enable-gpu --build
Installing with Podman
Rootless (Podman) local-only Open WebUI with Systemd service and auto-update
Consult the Docker documentation, because much of the configuration and syntax is interchangeable with Podman. See also rootless_tutorial. This example requires the slirp4netns network backend to facilitate server listen and Ollama communication over localhost only.
Rootless container execution with Podman (and Docker/containerd) does not support AppArmor confinement. This may increase the attack surface due to the requirement of user namespaces. Exercise caution and use judgment (in contrast to the root daemon) based on your threat model.
- Pull the latest image:
podman pull ghcr.io/open-webui/open-webui:main
- Create a new container using your desired configuration:
  Note:
  - -p 127.0.0.1:3000:8080 ensures that we listen only on localhost.
  - --network slirp4netns:allow_host_loopback=true permits the container to access Ollama when it also listens strictly on localhost.
  - --add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' adds a hosts record to the container and configures Open WebUI to use the friendly hostname. 10.0.2.2 is the default slirp4netns address used for localhost mapping.
  - --env 'ANONYMIZED_TELEMETRY=False' isn't necessary, since Chroma telemetry has been disabled in the code, but it is included as an example.
podman create -p 127.0.0.1:3000:8080 --network slirp4netns:allow_host_loopback=true --add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
Note: Podman 5.0 has updated the default rootless network backend to use the more performant pasta. While slirp4netns:allow_host_loopback=true still achieves the same local-only intention, it's now recommended to use a simple TCP forward instead, like --network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1. Full example:
podman create -p 127.0.0.1:3000:8080 --network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
- Prepare for the systemd user service:
  mkdir -p ~/.config/systemd/user/
- Generate the user service with Podman:
  podman generate systemd --new open-webui > ~/.config/systemd/user/open-webui.service
- Reload the systemd configuration:
  systemctl --user daemon-reload
- Enable and validate the new service:
  systemctl --user enable open-webui.service
  systemctl --user start open-webui.service
  systemctl --user status open-webui.service
- Enable and validate Podman auto-update:
  systemctl --user enable podman-auto-update.timer
  systemctl --user enable podman-auto-update.service
  systemctl --user status podman-auto-update.timer
  Dry run with the following command (omit --dry-run to force an update):
  podman auto-update --dry-run
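Once the service is enabled and started, a quick reachability check (assuming the -p 127.0.0.1:3000:8080 mapping used above):

```bash
# Confirm the service-managed container is up and serving on localhost only.
podman ps --filter name=open-webui
curl -I http://127.0.0.1:3000
```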
This process is compatible with Windows 11 WSL deployments when using Ollama within the WSL environment or using the Ollama Windows Preview. When using the native Ollama Windows Preview version, one additional step is required: enable mirrored networking mode.
Enabling Windows 11 mirrored networking
- Populate %UserProfile%\.wslconfig with:
  [wsl2]
  networkingMode=mirrored
- Restart WSL:
  wsl --shutdown
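To confirm mirrored networking took effect, a quick check from inside WSL (assuming the Ollama Windows Preview is running on its default port):

```bash
# With mirrored networking, the Windows-native Ollama shares localhost with WSL.
curl http://localhost:11434/
```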
Alternative Installation Methods
For other ways to install, like using Kustomize or Helm, check out INSTALLATION. Join our Open WebUI Discord community for more help and information.
Updating your Docker Installation
For detailed instructions on manually updating your local Docker installation of Open WebUI, including steps for those not using Watchtower and updates via Docker Compose, please refer to our dedicated guide: UPDATING.
For a quick update with Watchtower, use the command below. Remember to replace open-webui with your actual container name if it differs.
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
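If you'd rather not use Watchtower, here is a rough sketch of a manual update (see UPDATING for the authoritative steps); your data persists in the open-webui volume:

```bash
# Pull the newest image and remove the old container, then re-run the same
# `docker run` command you installed with; the named volume keeps your data.
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui
docker rm open-webui
```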
After updating Open WebUI, you might need to refresh your browser cache to see the changes.
How to Install Without Docker
While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.
Project Components
Open WebUI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, serving static frontend files and handling additional features). Both need to be running concurrently for the development environment.
Note: The backend is required for proper functionality.
Requirements 📦
Build and Install 🛠️
Run the following commands to install:
git clone https://github.com/open-webui/open-webui.git
cd open-webui/
# Copying required .env file
cp -RPp .env.example .env
# Building Frontend Using Node
npm i
npm run build
# Serving Frontend with the Backend
cd ./backend
pip install -r requirements.txt -U
bash start.sh
You should have Open WebUI up and running at http://localhost:8080/. Enjoy! 😄
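If you prefer to keep the backend's Python dependencies isolated (an optional step, not required above), a virtual environment can stand in for the last three commands:

```bash
# Optional: install the backend requirements inside a venv instead.
cd ./backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt -U
bash start.sh
```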