I wanted to run a local LLM by deploying Ollama and Open WebUI with Docker.
Since my machine has a GPU, I documented the steps for giving Docker containers access to it.
Conclusion
Avoid using Docker Desktop. Switching to Docker Engine resolved the issue.
This post skips over the installation of NVIDIA drivers and the NVIDIA Container Toolkit (formerly NVIDIA Container Runtime).
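Before blaming Docker, it is worth confirming both of those layers are actually in place. A minimal sketch, assuming nvidia-smi (shipped with the driver) and nvidia-ctk (shipped with the Container Toolkit) are on PATH:

```shell
# Sanity check: are the NVIDIA driver and Container Toolkit CLIs installed?
missing=""
for cmd in nvidia-smi nvidia-ctk; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "driver and toolkit CLIs found"
fi
```

The toolkit also ships "sudo nvidia-ctk runtime configure --runtime=docker", which registers the nvidia runtime in /etc/docker/daemon.json for you.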
Failure Case: Using Docker Desktop
Initially, I installed Docker Desktop.
After installing the drivers and the NVIDIA Container Toolkit, I checked the available runtimes with docker info | grep Runtime:
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
The NVIDIA runtime was missing. Reinstalling Docker Desktop didn’t resolve the issue.
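The same check can be scripted instead of eyeballed. A sketch that parses the daemon's runtime list (the Go-template format string is an assumption about your docker version; falling back to docker info | grep Runtime also works):

```shell
# Sketch: report whether the daemon has an 'nvidia' runtime registered.
runtimes=$(docker info --format '{{range $k, $v := .Runtimes}}{{$k}} {{end}}' 2>/dev/null)
case " $runtimes " in
  *" nvidia "*) echo "nvidia runtime registered" ;;
  *)            echo "nvidia runtime missing" ;;  # what Docker Desktop showed here
esac
```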
Success Case: Using Docker Engine
I uninstalled Docker Desktop and its related packages and installed Docker Engine instead.
After installing the drivers and NVIDIA Container Toolkit (as I did with Docker Desktop), I rechecked the available runtimes:
Runtimes: runc io.containerd.runc.v2 nvidia
Default Runtime: runc
The NVIDIA runtime was successfully detected.
I then configured the default runtime by editing /etc/docker/daemon.json:
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
Adding this configuration eliminates the need to specify the --gpus option for every container.
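The daemon must be restarted for the change to take effect; after that, a quick verification might look like this (the CUDA image tag is only an example, substitute a current one):

```shell
# Reload the daemon so the new default runtime takes effect.
sudo systemctl restart docker

# The default runtime should now be nvidia...
docker info | grep 'Default Runtime'   # expect: Default Runtime: nvidia

# ...and a CUDA container sees the GPU without any --gpus flag.
docker run --rm nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```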
It appears that Docker Desktop does not pick up NVIDIA runtimes installed on the host, at least in this setup — likely because Docker Desktop runs the Docker engine inside a VM, so the host's nvidia-container-runtime is not visible to it.