Troubleshooting Ollama

Explore how to download, load, and use models with Ollama, whether it runs inside Docker or as a remote setup.


๐Ÿณ Ollama Inside Dockerโ€‹

If Ollama is deployed inside Docker (e.g., using Docker Compose or Kubernetes), the service is reachable at the following addresses (a port-mapping sketch follows the list):

  • Inside the container: http://127.0.0.1:11434
  • From the host: http://localhost:11435 (if exposed via host network)
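The host address above assumes the container's port 11434 has been published to host port 11435. As a minimal sketch (the container name ollama and the ollama/ollama image are assumptions; adjust to your own setup), such a mapping with plain docker run could look like:

# Hypothetical example: publish container port 11434 as host port 11435
docker run -d --name ollama -p 11435:11434 ollama/ollama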

Step 1: Check Available Models​

docker exec -it openwebui curl http://ollama:11434/v1/models

From the host (if exposed):

curl http://localhost:11435/v1/models
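Either command should return an OpenAI-compatible JSON list of models. If you only want the model IDs, a quick sketch (assuming jq is installed on the host and the port mapping shown above) is:

# Hypothetical: extract only the model IDs from the response
curl -s http://localhost:11435/v1/models | jq -r '.data[].id'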

Step 2: Download Llama 3.2​

docker exec -it ollama ollama pull llama3.2

You can also download a higher-quality 8-bit quantized version from Hugging Face:

docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
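To confirm the pull succeeded, you can list the installed models or send a quick test prompt (the container name and model tag match the commands above):

# Verify the model was downloaded, then run a one-off prompt
docker exec -it ollama ollama list
docker exec -it ollama ollama run llama3.2 "Say hello"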

You now have everything you need to download and run models with Ollama. Happy exploring!