Can't find `nvidia-smi` in 2.1.4-py3-tf-gpu
jayavanth opened this issue · comments
```
$ docker run --runtime=nvidia -it --rm gw000/keras:2.1.4-py3-tf-gpu nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"nvidia-smi\": executable file not found in $PATH": unknown.
```
In the base image for GPU support (https://github.com/gw0/docker-debian-cuda) there was a major refactoring due to driver compatibility issues in some situations. The new image no longer ships the CUDA driver libraries and tools (anything specific to your CUDA kernel module), and this includes `nvidia-smi`. Can you check whether it gets injected from somewhere else when using the OCI runtime `nvidia`? Something along the lines of:
```
$ docker run --runtime=nvidia -it --rm gw000/keras:2.1.4-py3-tf-gpu bash
$ find / -iname '*nvidia-smi*'
```
I see. There was no `nvidia-smi` in that image. Looks like NVIDIA has not released any official Docker image for Debian: https://hub.docker.com/r/nvidia/cuda/tags/
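For completeness: with nvidia-docker2 / nvidia-container-runtime, driver utilities such as `nvidia-smi` are injected into the container by the runtime hook only when the `utility` driver capability is requested via the `NVIDIA_DRIVER_CAPABILITIES` environment variable (whether that capability is on by default depends on the toolkit version, and this image may not set it). A sketch of requesting the injection explicitly, assuming a working `nvidia` runtime on the host:

```shell
# Request the "utility" driver capability explicitly so the nvidia runtime
# mounts the host's driver tools (nvidia-smi among them) into the container.
docker run --runtime=nvidia \
  -e NVIDIA_DRIVER_CAPABILITIES=utility \
  -it --rm gw000/keras:2.1.4-py3-tf-gpu nvidia-smi
```

Images that set `NVIDIA_DRIVER_CAPABILITIES` themselves (as the official nvidia/cuda images do) don't need the explicit `-e` flag.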