Using GPU inside docker container - CUDA Version: N/A and torch.cuda.is_available returns False


Problem description

I'm trying to use the GPU from inside my Docker container. I'm running Docker version 19.03 on Ubuntu 18.04.

Outside the Docker container, if I run nvidia-smi I get the output below.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.05    Driver Version: 450.51.05    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   30C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

If I run the same thing inside a container created from the nvidia/cuda Docker image, I get the same output as above and everything runs smoothly. torch.cuda.is_available() returns True.

But if I run the same nvidia-smi command inside any other Docker container, it gives the following output, where you can see that the CUDA Version shows as N/A. Inside these containers, torch.cuda.is_available() also returns False.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.05    Driver Version: 450.51.05    CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   30C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
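As a minimal diagnostic sketch (assuming python3 and PyTorch are installed in the container), you can compare what nvidia-smi reports with what PyTorch itself sees:

# Run inside the container: compare the driver's view with PyTorch's view
nvidia-smi
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# torch.version.cuda is the CUDA version PyTorch was built against;
# torch.cuda.is_available() returns False when the CUDA driver library
# (libcuda.so) has not been mounted into the container.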

I have installed nvidia-container-toolkit using the following commands.

# Add the NVIDIA container toolkit apt repository key and source list
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu18.04/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
# Install the toolkit and restart the Docker daemon to pick it up
sudo apt-get update
sudo apt-get install nvidia-container-toolkit
sudo systemctl restart docker
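
As a hedged sanity check after installing (exact package names and output may vary by version), you can confirm on the host that the toolkit components are in place:

# Confirm the toolkit packages landed on the host
dpkg -l | grep nvidia-container
# nvidia-container-cli ships with libnvidia-container; 'info' prints the
# driver version and GPUs it can expose to containers
nvidia-container-cli info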

I started my containers using the following commands:

sudo docker run --rm --gpus all nvidia/cuda nvidia-smi   # works: CUDA Version shown
sudo docker run -it --rm --gpus all ubuntu nvidia-smi    # problem: CUDA Version: N/A

Recommended answer

docker run --rm --gpus all nvidia/cuda nvidia-smi should not return CUDA Version: N/A if everything (i.e. the NVIDIA driver, the CUDA toolkit, and nvidia-container-toolkit) is installed correctly on the host machine.

Given that docker run --rm --gpus all nvidia/cuda nvidia-smi returns correctly for you, the host setup itself seems fine. I also had the CUDA Version: N/A problem inside the container, and I had some luck in solving it:

Please see my answer at https://stackoverflow.com/a/64422438/2202107 (obviously, you need to adjust it and install the matching/correct versions of everything).
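
As extra context, one hypothesis worth testing (an assumption on my part about how nvidia-container-toolkit mounts driver libraries, not something from the linked answer): the nvidia/cuda images set NVIDIA_DRIVER_CAPABILITIES=compute,utility, while a plain ubuntu image does not, so the toolkit may mount only the utility libraries (nvidia-smi and NVML) and leave out libcuda, which would match both the N/A version string and torch.cuda.is_available() returning False. Passing the variables explicitly:

# Hypothetical check: request the compute capability explicitly so the
# toolkit also mounts libcuda into the plain ubuntu container
sudo docker run -it --rm --gpus all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  ubuntu nvidia-smi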
