Quick Start

Get Open WebUI running on your machine. Pick your preferred method below.

Open WebUI works on macOS, Linux (x86_64 and ARM64, including Raspberry Pi and NVIDIA DGX Spark), and Windows.

  • Docker: Officially supported and recommended for most users. Requires Docker installed.
  • Python: Suitable for low-resource environments or manual setups.
  • Kubernetes: Ideal for enterprise deployments requiring scaling and orchestration.

Quick Start with Docker

info

WebSocket support is required. Ensure your network configuration allows WebSocket connections.

Docker Hub Now Available

Open WebUI images are published to both registries:

  • GitHub Container Registry: ghcr.io/open-webui/open-webui
  • Docker Hub: openwebui/open-webui

Both contain identical images. Replace ghcr.io/open-webui/open-webui with openwebui/open-webui in any command below.

1. Pull the image

docker pull ghcr.io/open-webui/open-webui:main

2. Run the container

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
Flag                               Purpose
-v open-webui:/app/backend/data    Persistent storage. Prevents data loss between restarts.
-p 3000:8080                       Exposes the UI on port 3000 of your machine.

3. Open the UI

Visit http://localhost:3000.
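To confirm the container came up cleanly, a quick check from the shell (this assumes the container name and port from the run command above):

```shell
# Check that the container is running and the UI responds.
docker ps --filter name=open-webui --format '{{.Names}}: {{.Status}}'
docker logs --tail 20 open-webui        # recent startup output
curl -fsS http://localhost:3000 >/dev/null && echo "UI is up"
```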


Image Variants

Tag          Use case
:main        Standard image (recommended)
:main-slim   Smaller image; downloads Whisper and embedding models on first use
:cuda        Nvidia GPU support (add --gpus all to docker run)
:ollama      Bundles Ollama inside the container for an all-in-one setup

Specific release versions

For production environments, pin a specific version instead of using floating tags:

docker pull ghcr.io/open-webui/open-webui:v0.8.6
docker pull ghcr.io/open-webui/open-webui:v0.8.6-cuda
docker pull ghcr.io/open-webui/open-webui:v0.8.6-ollama

Common Configurations

GPU support (Nvidia)

docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:cuda

Bundled with Ollama

A single container with Open WebUI and Ollama together:

With GPU:

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

CPU only:

docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

Connecting to Ollama on a different server

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
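If Ollama runs on the Docker host itself rather than a remote server, the container needs a route back to the host. One common pattern is the host-gateway mapping sketched below (the mapping and the default Ollama port 11434 are assumptions about your setup):

```shell
# Reach an Ollama instance running on the Docker host itself.
# --add-host maps host.docker.internal to the host's gateway address.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```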

Single-user mode (no login)

docker run -d -p 3000:8080 -e WEBUI_AUTH=False -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
warning

You cannot switch between single-user mode and multi-account mode after this change.


Using the Dev Branch

tip

Testing dev builds is one of the most valuable ways to contribute. Run it on a test instance and report issues on GitHub.

The :dev tag contains the latest features before they reach a stable release.

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:dev
warning

Never share your data volume between dev and production. Dev builds may include database migrations that are not backward-compatible. Always use a separate volume (e.g., -v open-webui-dev:/app/backend/data).
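For example, a dev instance can run alongside production on its own port and volume (the port 3001 and the names here are illustrative):

```shell
# A separate name, host port, and volume keep the dev instance
# fully isolated from any production container.
docker run -d -p 3001:8080 \
  -v open-webui-dev:/app/backend/data \
  --name open-webui-dev \
  ghcr.io/open-webui/open-webui:dev
```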

If Docker is not your preference, follow the Developing Open WebUI guide.


Uninstall

  1. Stop and remove the container:

    docker rm -f open-webui
  2. Remove the image (optional):

    docker rmi ghcr.io/open-webui/open-webui:main
  3. Remove the volume (optional, deletes all data):

    docker volume rm open-webui

Updating

To update your local Docker installation to the latest version, you can either use Watchtower or manually update the container.

Option 1: Using Watchtower

With Watchtower, you can automate the update process:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock nickfedor/watchtower --run-once open-webui

(Replace open-webui with your container's name if it's different.)
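Watchtower can also run as a long-lived container that polls for new images on a schedule. A sketch, assuming your Watchtower build supports an interval flag (the daily interval is illustrative):

```shell
# Poll for a new open-webui image once every 24 hours (86400 seconds).
docker run -d --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  nickfedor/watchtower --interval 86400 open-webui
```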

Option 2: Manual Update

  1. Stop and remove the current container:

    docker rm -f open-webui
  2. Pull the latest version:

    docker pull ghcr.io/open-webui/open-webui:main
  3. Start the container again:

    docker run -d -p 3000:8080 -v open-webui:/app/backend/data \
      -e WEBUI_SECRET_KEY="your-secret-key" \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main
Set WEBUI_SECRET_KEY

Without a persistent WEBUI_SECRET_KEY, you'll be logged out every time the container is recreated. Generate one with openssl rand -hex 32.
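One way to keep sessions stable is to generate the key once, store it in a file, and reuse it on every docker run. A minimal sketch (the file path is illustrative):

```shell
# Generate the key once; reuse it for every container recreation.
SECRET_FILE="$HOME/.open-webui-secret"   # illustrative location
if [ ! -f "$SECRET_FILE" ]; then
  openssl rand -hex 32 > "$SECRET_FILE"
fi
WEBUI_SECRET_KEY="$(cat "$SECRET_FILE")"
echo "${#WEBUI_SECRET_KEY}"   # 32 random bytes hex-encode to 64 characters
```

Pass the value with `-e WEBUI_SECRET_KEY="$WEBUI_SECRET_KEY"` in the run command above so recreating the container does not invalidate sessions.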

For version pinning, rollback, automated update tools, and backup procedures, see the full update guide.


After You Install

First Login
  • Admin account: The first account created gets Administrator privileges and controls user management and system settings.
  • New sign-ups: Subsequent registrations start with Pending status and require Administrator approval.
  • Privacy: All data, including login details, is stored locally on your device. By default, Open WebUI makes no external requests, and all models are private until explicitly shared.

Connect a Model Provider

Open WebUI needs at least one model provider to start chatting. Choose yours:

Provider                    Guide
Ollama (local models)       Starting with Ollama →
OpenAI                      Starting with OpenAI →
Any OpenAI-compatible API   OpenAI-Compatible Providers →
Anthropic                   Starting with Anthropic →
llama.cpp                   Starting with llama.cpp →
vLLM                        Starting with vLLM →

Connect an Agent

Want more than a model? AI agents can execute terminal commands, read and write files, search the web, maintain memory, and chain complex workflows — all through Open WebUI's familiar chat interface.

  • Hermes Agent: Autonomous agent by Nous Research with terminal, file ops, web search, memory, and extensible skills. Set up Hermes Agent →
  • OpenClaw: Open-source self-hosted agent with shell access, file operations, web browsing, and messaging integrations. Set up OpenClaw →

Learn more about how agents differ from providers in the Connect an Agent overview →

Explore Features

Once connected, explore what Open WebUI can do: Features Overview →

Experimental: Open Responses

Open WebUI has experimental support for the Open Responses specification. See the Starting with Open Responses Guide to learn more.


This content is for informational purposes only and does not constitute a warranty, guarantee, or contractual commitment. Open WebUI is provided "as is." See your license for applicable terms.