Server Connectivity Issues
We're here to help you get everything set up and running smoothly. Below, you'll find step-by-step instructions tailored for different scenarios to solve common connection issues.
HTTPS, TLS, CORS & WebSocket Issues
If you're experiencing connectivity problems with Open WebUI, especially when using reverse proxies or HTTPS, these issues often stem from improper CORS, TLS, WebSocket, or cookie configuration. Here's how to diagnose and fix them.
Common Symptoms
You might be experiencing these issues if you see:
- Empty responses like `{}` in the chat
- Errors like `Unexpected token 'd', "data: {"id"... is not valid JSON`
- Garbled markdown (visible `##`, `**`, broken formatting) during streaming; see Streaming Response Corruption
- WebSocket connection failures in browser console
- WebSocket connection failures in CLI logs
- Login problems or session issues
- CORS errors in browser developer tools
- Mixed content warnings when accessing over HTTPS
Required Configuration for HTTPS & Reverse Proxies
Critical Environment Variables
When running Open WebUI behind a reverse proxy with HTTPS, you must configure these settings:
# Set this to your actual domain BEFORE FIRST STARTUP (required for OAuth/SSO and proper operation)
WEBUI_URL=https://your-open-webui-domain.com
# If you already started Open WebUI, don't worry, you can set this config from the admin panel as well!
# CORS configuration - CRITICAL for WebSocket functionality
# Include ALL ways users might access your instance
# Make sure to include every IP, hostname, and domain through which users can (or could) reach Open WebUI, matching how requests actually arrive at your instance
# e.g. localhost, 127.0.0.1, 0.0.0.0, <ip of your server/computer>, public domain - all in http and https with the correct ports
CORS_ALLOW_ORIGIN="https://yourdomain.com;http://yourdomain.com;https://yourip;http://localhost:3000"
# Cookie security settings for HTTPS
# Disable if you do not use HTTPS
WEBUI_SESSION_COOKIE_SECURE=true
WEBUI_AUTH_COOKIE_SECURE=true
# For OAuth/SSO, you will probably have to use 'lax' (strict can break OAuth callbacks)
WEBUI_SESSION_COOKIE_SAME_SITE=lax
WEBUI_AUTH_COOKIE_SAME_SITE=lax
# WebSocket support (if using Redis)
# If you still experience websocket-related issues after configuring all of the above, you can try turning OFF ENABLE_WEBSOCKET_SUPPORT
# But this is not recommended for production and also not officially supported!
# If you experience websocket issues, you should ideally provide websocket support through reverse proxies.
ENABLE_WEBSOCKET_SUPPORT=true
WEBSOCKET_MANAGER=redis
WEBSOCKET_REDIS_URL=redis://redis:6379/1
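For reference, here is roughly how these variables can be passed to a Docker-based deployment. This is a sketch: the domain, origins, ports, and volume name are placeholders, so keep your real values and any other flags you already use.
# Placeholder values; adjust domain, origins, ports, and volume to your setup
docker run -d -p 3000:8080 \
  -e WEBUI_URL="https://your-open-webui-domain.com" \
  -e CORS_ALLOW_ORIGIN="https://your-open-webui-domain.com;http://localhost:3000" \
  -e WEBUI_SESSION_COOKIE_SECURE=true \
  -e WEBUI_AUTH_COOKIE_SECURE=true \
  -e WEBUI_SESSION_COOKIE_SAME_SITE=lax \
  -e WEBUI_AUTH_COOKIE_SAME_SITE=lax \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main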
WEBUI_URL Configuration
The WEBUI_URL must be set correctly BEFORE using OAuth/SSO. Since it's a persistent config variable, you can only change it by:
- Disabling persistent config temporarily with `ENABLE_PERSISTENT_CONFIG=false`
- Changing it in Admin Panel > Settings > WebUI URL
- Setting it correctly before first launch
CORS Configuration Details
The CORS_ALLOW_ORIGIN setting is crucial for WebSocket functionality. If you see errors in the logs like "https://yourdomain.com is not an accepted origin" or "http://127.0.0.1:3000 is not an accepted origin", you need to add that URL to your CORS configuration. Use semicolons to separate multiple origins, and include every possible way users access your instance (domain, IP, localhost).
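As a rough check, you can send a request with an Origin header and look for the CORS headers in the response. Both URLs below are placeholders, and `/api/config` is used here only as an assumed endpoint that answers without authentication.
# Replace both URLs with your own; empty output usually means the origin
# is not present in CORS_ALLOW_ORIGIN
curl -s -D - -o /dev/null \
  -H "Origin: https://yourdomain.com" \
  https://your-open-webui-domain.com/api/config | grep -i "access-control"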
Reverse Proxy / SSL/TLS Configuration
For reverse proxy and TLS setups, check our tutorials here.
WebSocket Troubleshooting
WebSocket support is required for Open WebUI v0.5.0 and later. If WebSockets aren't working:
- Check your reverse proxy configuration - Ensure `Upgrade` and `Connection` headers are properly set
- Verify CORS settings - WebSocket connections respect CORS policies
- Check browser console - Look for WebSocket connection errors
- Test direct connection - Try connecting directly to Open WebUI without the proxy to isolate the issue (see the handshake check sketched below)
- Check for HTTP/2 WebSocket Issues - Some proxies (like HAProxy 3.x) enable HTTP/2 by default. If your proxy handles client connections via HTTP/2 but the backend/application doesn't support RFC 8441 (WebSockets over H2) properly, the instance may "freeze" or stop responding.
  - Fix for HAProxy: Add `option h2-workaround-bogus-websocket-clients` to your configuration or force the backend connection to use HTTP/1.1.
  - Fix for Nginx: Ensure you are using `proxy_http_version 1.1;` in your location block (which is the default in many Open WebUI examples).
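The handshake check mentioned above can be approximated with curl. Treat this as a sketch: the `/ws/socket.io/` path is an assumption about where your deployment serves Socket.IO, and the hostname is a placeholder.
# A healthy chain should answer "HTTP/1.1 101 Switching Protocols".
# A plain 200/400 or a timeout usually means the proxy drops the
# Upgrade/Connection headers or buffers the connection.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  "https://your-open-webui-domain.com/ws/socket.io/?EIO=4&transport=websocket"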
For multi-instance deployments, configure Redis for WebSocket management:
ENABLE_WEBSOCKET_SUPPORT=true
WEBSOCKET_MANAGER=redis
WEBSOCKET_REDIS_URL=redis://redis:6379/1
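A minimal sketch of what that can look like with plain Docker, assuming a user-defined network so the hostname `redis` in `WEBSOCKET_REDIS_URL` resolves (container names, ports, and the Redis image tag are illustrative):
# Shared network so "redis" resolves from the Open WebUI container
docker network create owui-net
docker run -d --name redis --network owui-net redis:alpine
docker run -d --name open-webui --network owui-net -p 3000:8080 \
  -e ENABLE_WEBSOCKET_SUPPORT=true \
  -e WEBSOCKET_MANAGER=redis \
  -e WEBSOCKET_REDIS_URL=redis://redis:6379/1 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main
Note that fully multi-instance deployments also need shared state such as a common database; this sketch only illustrates the WebSocket/Redis wiring.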
Testing Your Configuration
To verify your setup is working:
- Check HTTPS: Visit your domain and ensure you see a valid certificate with no browser warnings
- Test WebSockets: Open browser developer tools, go to Network tab, filter by "WS", and verify WebSocket connections are established
- Verify CORS: Check browser console for any CORS-related errors
- Test functionality: Send a message and ensure streaming responses work properly
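If you prefer the command line, the first two checks can be approximated like this (replace the hostname with your own):
# Follow redirects and confirm you end up on HTTPS with a 200 response
curl -sSIL http://your-open-webui-domain.com | grep -iE "^(HTTP|location)"

# Inspect the certificate subject, issuer, and validity dates
openssl s_client -connect your-open-webui-domain.com:443 \
  -servername your-open-webui-domain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates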
Quick Fixes Checklist
- ✅ Set `WEBUI_URL` to your actual HTTPS domain before enabling OAuth
- ✅ Configure `CORS_ALLOW_ORIGIN` with all possible access URLs
- ✅ Enable `WEBUI_SESSION_COOKIE_SECURE=true` for HTTPS
- ✅ Add WebSocket headers to your reverse proxy configuration
- ✅ Use TLSv1.2 or TLSv1.3 in your SSL configuration
- ✅ Set proper `X-Forwarded-Proto` headers in your reverse proxy
- ✅ Ensure HTTP to HTTPS redirects are in place
- ✅ Configure Let's Encrypt for automatic certificate renewal
- ✅ Disable proxy buffering for SSE streaming (see below)
Garbled Markdown / Streaming Response Corruption
If streaming responses show garbled markdown rendering (e.g., visible ##, **, or broken formatting), but disabling streaming fixes the issue, this is typically caused by nginx proxy buffering.
Common Symptoms
- Raw markdown tokens visible in responses (`##`, `**`, `###`)
- Bold markers appearing incorrectly (`** Control:**` instead of `**Control:**`)
- Words or sections randomly missing from responses
- Formatting works correctly when streaming is disabled
Cause: Nginx Proxy Buffering
When nginx's proxy buffering is enabled, it re-chunks the SSE (Server-Sent Events) stream arbitrarily. This breaks markdown tokens across chunk boundaries: for example, `**bold**` arrives as the separate chunks `**` + `bold` + `**`, causing the markdown parser to fail.
Solution: Disable Proxy Buffering
Add these directives to your nginx location block for Open WebUI:
location / {
proxy_pass http://your-open-webui-upstream;
# CRITICAL: Disable buffering for SSE streaming
proxy_buffering off;
proxy_cache off;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
Disabling proxy buffering also significantly improves streaming speed, as responses stream byte-by-byte directly to the client without nginx's buffering delay.
For Other Reverse Proxies
- HAProxy: Ensure `option http-buffer-request` is not enabled for SSE endpoints
- Traefik: Check compression/buffering middleware settings
- Caddy: Generally handles SSE correctly by default, but check for any buffering plugins
Connection to Ollama Server
Accessing Ollama from Open WebUI
Struggling to connect to Ollama from Open WebUI? It could be because Ollama isn't listening on a network interface that allows external connections. Let's sort that out:
- Configure Ollama to Listen Broadly: Set `OLLAMA_HOST` to `0.0.0.0` so Ollama listens on all network interfaces.
- Update Environment Variables: Ensure that `OLLAMA_HOST` is set correctly in your deployment environment (see the examples below).
- Restart Ollama: A restart is needed for the changes to take effect.
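How you set `OLLAMA_HOST` depends on how Ollama is started. Two common patterns are sketched below; double-check Ollama's documentation for the method that matches your install:
# If you launch Ollama manually from a shell:
OLLAMA_HOST=0.0.0.0 ollama serve

# If Ollama runs as a systemd service (typical Linux install):
#   sudo systemctl edit ollama.service
# then add under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0"
# and restart the service:
sudo systemctl restart ollama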
After setting up, verify that Ollama is accessible by visiting the WebUI interface.
For more detailed instructions on configuring Ollama, please refer to Ollama's Official Documentation.
Docker Connection Error
If you're seeing a connection error when trying to access Ollama, it might be because the WebUI docker container can't talk to the Ollama server running on your host. Let's fix that:
- Adjust the Network Settings: Use the `--network=host` flag in your Docker command. This links your container directly to your host's network.
- Change the Port: With host networking the port mapping no longer applies, so the WebUI is reached on port 8080 instead of 3000.
Example Docker Command:
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
After running the above, your WebUI should be available at http://localhost:8080.
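If host networking isn't suitable for your setup (its behaviour differs on Docker Desktop, for example), a common alternative is to keep the port mapping and point Open WebUI at the host through Docker's gateway alias; roughly:
# Keeps the WebUI on http://localhost:3000 while reaching Ollama on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
Ollama still needs to be listening on the host as described above.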
Model List Loading Issues (Slow UI / Unreachable Endpoints)
If your Open WebUI takes a long time to load models, or the model selector spins indefinitely, it may be due to an unreachable or slow API endpoint configured in your connections.
Common Symptoms
- Model selector shows a loading spinner for extended periods
- `500 Internal Server Error` on the `/api/models` endpoint
- UI becomes unresponsive when opening Settings
- Docker/server logs show: `Connection error: Cannot connect to host...`
Cause: Unreachable Endpoints
When you configure multiple Ollama or OpenAI base URLs (for load balancing or redundancy), Open WebUI attempts to fetch models from all configured endpoints. If any endpoint is unreachable, the system waits for the full connection timeout before returning results.
By default, Open WebUI waits 10 seconds per unreachable endpoint when fetching the model list. With multiple bad endpoints, this delay compounds; for example, three unreachable endpoints can add roughly 30 seconds before the model list appears.
Solution 1: Adjust the Timeout
Lower the timeout for model list fetching using the AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST environment variable:
# Set a shorter timeout (in seconds) for faster failure on unreachable endpoints
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=3
This reduces how long Open WebUI waits for each endpoint before giving up and continuing.
Solution 2: Fix or Remove Unreachable Endpoints
- Go to Admin Settings → Connections
- Review your Ollama and OpenAI base URLs
- Remove or correct any unreachable IP addresses or hostnames
- Save the configuration
Solution 3: Recover from Database-Persisted Bad Configuration
If you saved an unreachable URL and now can't access the Settings UI to fix it, the bad configuration is persisted in the database and takes precedence over environment variables. Use one of these recovery methods:
Option A: Reset configuration on startup
# Forces environment variables to override database values on next startup
RESET_CONFIG_ON_START=true
Option B: Always use environment variables
# Prevents database values from taking precedence (changes in UI won't persist across restarts)
ENABLE_PERSISTENT_CONFIG=false
Option C: Manual database cleanup (advanced)
If using SQLite, stop the container and run:
sqlite3 webui.db "DELETE FROM config WHERE id LIKE '%urls%';"
Manual database manipulation should be a last resort. Always back up your database first.
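For example, a cautious sequence might look like this; the container name is an assumption, and the database path is wherever your Open WebUI volume stores webui.db:
# Stop the app, back up the database, run the cleanup, then restart
docker stop open-webui
cp webui.db webui.db.backup
sqlite3 webui.db "DELETE FROM config WHERE id LIKE '%urls%';"
docker start open-webui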
Related Environment Variables
| Variable | Default | Description |
|---|---|---|
| `AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST` | 10 | Timeout (seconds) for fetching model lists |
| `AIOHTTP_CLIENT_TIMEOUT` | 300 | General API request timeout |
| `RESET_CONFIG_ON_START` | false | Reset database config to env var values on startup |
| `ENABLE_PERSISTENT_CONFIG` | true | Whether database config takes precedence over env vars |
See the Environment Configuration documentation for more details.
Slow Performance or Timeouts on Low-Spec Hardware
If you're experiencing slow page loads, API timeouts, or an unresponsive UI, especially on resource-constrained systems, this may be related to database session sharing.
Cause
Database session sharing can overwhelm low-spec hardware (Raspberry Pi, containers with minimal CPU, etc.) or SQLite databases under concurrent load.
Solution
Disable database session sharing:
DATABASE_ENABLE_SESSION_SHARING=false
For PostgreSQL on adequate hardware, enabling this setting may improve performance. See the DATABASE_ENABLE_SESSION_SHARING documentation for details.
SSL Connection Issue with Hugging Face
Encountered an SSL error? It could be an issue with the Hugging Face server. Here's what to do:
- Check Hugging Face Server Status: Verify if there's a known outage or issue on their end.
- Switch Endpoint: If Hugging Face is down, switch the endpoint in your Docker command.
Example Docker Command for Connectivity Issues:
docker run -d -p 3000:8080 -e HF_ENDPOINT=https://hf-mirror.com/ --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
SSL Certificate Issues with Internal Tools
If you are using external tools like Tika, Ollama (for embeddings), or an external reranker with self-signed certificates, you might encounter SSL verification errors.
Common Symptoms
- Logs show `[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate`
- Tika document ingestion fails
- Embedding generation fails with SSL errors
- Reranking fails with SSL errors
Solution
You can disable SSL verification for these internal tool connections using the following environment variables:
- For synchronous requests (Tika, External Reranker): `REQUESTS_VERIFY=false`
- For asynchronous requests (Ollama Embeddings): `AIOHTTP_CLIENT_SESSION_SSL=false`
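In a Docker deployment, these could be passed alongside your existing flags, for example (only the relevant flags are shown; keep the rest of your usual command):
docker run -d -p 3000:8080 \
  -e REQUESTS_VERIFY=false \
  -e AIOHTTP_CLIENT_SESSION_SSL=false \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main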
Disabling SSL verification reduces security. Only do this if you trust the network and the services you are connecting to (e.g., functioning within a secure internal network).
Podman on MacOS
Running on MacOS with Podman? Here's how to ensure connectivity:
- Enable Host Access: Podman 5.0+ uses pasta by default, which simplifies host loopback. If you are on an older version, you may need `--network slirp4netns:allow_host_loopback=true`.
- Set OLLAMA_BASE_URL: Ensure it points to `http://host.containers.internal:11434`.
Example Podman Command:
podman run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://host.containers.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
MCP Tool Connections
If you are having trouble connecting to MCP tools (e.g. "Failed to connect to MCP server"):
- Authentication: Ensure you aren't using "Bearer" without a token.
- Filters: Try adding a comma to the Function Name Filter List.
See the MCP Feature Documentation for detailed troubleshooting steps.
SSL/TLS Errors with Web Search
If you are encountering SSL errors while using the Web Search feature, they usually fall into two categories: Proxy configuration issues or Certificate verification issues.
Certificate Verification Issues
If you are seeing SSL verification errors when Open WebUI tries to fetch content from websites (Web Loader):
- Symptom: `[SSL: CERTIFICATE_VERIFY_FAILED]` when loading search results.
- Solution: You can disable SSL verification for the Web Loader (scraper) specifically: `ENABLE_WEB_LOADER_SSL_VERIFICATION=false`

Note: This setting applies to the fetching of web pages. If you are having SSL issues with the Search Engine itself (e.g., local SearXNG) or with subsequent steps (Embedding/Reranking), see the SSL Certificate Issues with Internal Tools section above and the Proxy Configuration Issues section below.
Proxy Configuration Issues
If you're seeing SSL errors like UNEXPECTED_EOF_WHILE_READING or Max retries exceeded when using web search providers (Bocha, Tavily, etc.):
Common Symptoms
- `SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING]'))`
- `Max retries exceeded with url: /v1/web-search`
- Web search works in standalone Python scripts but fails in Open WebUI
Cause: HTTP Proxy Configured for HTTPS Traffic
This typically happens when you have an HTTP proxy configured for HTTPS traffic. The HTTP proxy cannot properly handle TLS connections, causing SSL handshake failures.
Check your environment for these variables:
- `HTTP_PROXY` / `http_proxy`
- `HTTPS_PROXY` / `https_proxy`
If your https_proxy points to http://... (HTTP) instead of https://... (HTTPS), SSL handshakes will fail because the proxy terminates the connection unexpectedly.
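A quick way to audit what the Open WebUI process actually inherits (the container name is a placeholder):
# On the host shell that starts the service
env | grep -i proxy

# Inside a Docker container
docker exec open-webui env | grep -i proxy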
Solutions
- Fix proxy configuration: Use an HTTPS-capable proxy for HTTPS traffic, or configure your HTTP proxy to properly support CONNECT tunneling for SSL
- Bypass proxy for specific hosts: Set the `NO_PROXY` environment variable, e.g. `NO_PROXY=api.bochaai.com,api.tavily.com,api.search.brave.com`
- Disable proxy if not needed: Unset the proxy environment variables entirely
Why Standalone Scripts Work
When you run a Python script directly, it may not inherit the same proxy environment variables that your Open WebUI service is using. The service typically inherits environment variables from systemd, Docker, or your shell profile, which may have different proxy settings.