Troubleshooting Ollama
Explore how to download, load, and use models with Ollama, both via Docker and remote setups.
- Ollama Inside Docker
- BYO Ollama (External Ollama)
🐳 Ollama Inside Docker
If Ollama is deployed inside Docker (e.g., using Docker Compose or Kubernetes), the service will be available at:
- Inside the container:
http://127.0.0.1:11434
- From the host:
http://localhost:11435
(if the container's port 11434 is published to host port 11435, as in the Compose sketch below)
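For illustration, here is a minimal Docker Compose sketch that publishes the container's port 11434 on host port 11435. The service name, container name, and volume are assumptions; adjust them to match your deployment:
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11435:11434"  # host port 11435 -> container port 11434
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama: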
Step 1: Check Available Models
docker exec -it openwebui curl http://ollama:11434/v1/models
From the host (if exposed):
curl http://localhost:11435/v1/models
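Either command should return an OpenAI-style model list. The exact fields vary between Ollama versions, and the model names below are illustrative, but the response looks roughly like this:
{"object":"list","data":[{"id":"llama3.2:latest","object":"model","created":1722000000,"owned_by":"library"}]}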
Step 2: Download Llama 3.2
docker exec -it ollama ollama pull llama3.2
You can also download a higher-quality version (8-bit) from Hugging Face:
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
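To confirm the download succeeded, you can list the models installed in the container (assuming the container is named ollama, as above):
docker exec -it ollama ollama list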
🛠️ Bring Your Own Ollama (BYO Ollama)
If Ollama is running on the host machine or another server on your network, follow these steps.
Step 1: Check Available Models
Local:
curl http://localhost:11434/v1/models
Remote:
curl http://<remote-ip>:11434/v1/models
Step 2: Set OLLAMA_HOST
export OLLAMA_HOST=<remote-ip>:11434
This points the local ollama CLI at the remote server, so the pull in the next step runs against that instance. (Open WebUI itself connects to a remote Ollama via its OLLAMA_BASE_URL setting instead.)
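If Open WebUI runs in Docker, here is a minimal sketch of pointing it at the same remote server through OLLAMA_BASE_URL. The container name, ports, and volume follow the standard Open WebUI quick-start and are assumptions; adjust them to your setup:
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<remote-ip>:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main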
Step 3: Download Llama 3.2
ollama pull llama3.2
Or download the 8-bit version from Hugging Face:
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
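Once the pull finishes, you can verify the model is available and give it a quick interactive test:
ollama list
ollama run llama3.2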
You now have everything you need to download and run models with Ollama. Happy exploring!