Tensorflow: How do you monitor GPU performance during model training in real-time?
Question
I am new to Ubuntu and GPUs and have recently been using a new PC with Ubuntu 16.04 and 4 NVIDIA 1080ti GPUs in our lab. The machine also has a 16-core i7 processor.
I have some basic questions:

- Tensorflow is installed for GPU. I presume, then, that it automatically prioritises GPU usage? If so, does it use all 4 together, or does it use 1 and then recruit another if needed?
- Can I monitor, in real-time, the GPU use/activity during training of a model?
I fully understand this is basic hardware stuff, but clear, definitive answers to these specific questions would be great.
Based on this output - is this really saying that nearly all the memory on each one of my GPUs is being used?
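Most likely yes as far as nvidia-smi is concerned, but not because training needs it: by default a TensorFlow session maps nearly all the memory of every visible GPU at startup, even when the ops only run on one card. A minimal configuration sketch (assuming the TF 1.x `ConfigProto` API, which matches this question's era) to make allocation on-demand instead:

```python
# Sketch, assuming the TF 1.x API: by default a Session grabs almost all
# memory on every visible GPU, which makes nvidia-smi show all four cards
# as nearly full even when only /gpu:0 is doing work.
try:
    import tensorflow as tf

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True  # allocate GPU memory on demand
    # Alternatively, cap the fraction of each GPU's memory that is mapped:
    # config.gpu_options.per_process_gpu_memory_fraction = 0.4
    sess = tf.Session(config=config)
except ImportError:
    pass  # TensorFlow not installed in this environment; the config above is the point
```

With `allow_growth`, the memory column in nvidia-smi reflects what the model actually uses rather than what TensorFlow pre-reserved.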
Answer
Tensorflow does not automatically use all GPUs; it will use only one GPU, specifically the first one, /gpu:0.
You have to write multi-GPU code to utilize all available GPUs. See the CIFAR multi-GPU example.
To check usage every 0.1 seconds:
watch -n0.1 nvidia-smi
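If you want the numbers in a script rather than on screen, nvidia-smi can also emit machine-readable CSV via its `--query-gpu`/`--format` options (real flags of the nvidia-smi CLI), e.g. `nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits -l 1`. A small sketch parsing one such sample (the sample values below are made up for illustration):

```python
def parse_gpu_csv(output):
    """Parse one nvidia-smi CSV sample (index, util %, mem used/total MiB)."""
    gpus = []
    for line in output.strip().splitlines():
        idx, util, used, total = [field.strip() for field in line.split(',')]
        gpus.append({'index': int(idx),
                     'util_pct': int(util),
                     'mem_used_mib': int(used),
                     'mem_total_mib': int(total)})
    return gpus

# Hypothetical sample of what the query above might print on a 1080 Ti box:
sample = "0, 87, 10123, 11178\n1, 3, 211, 11178"
print(parse_gpu_csv(sample)[0]['util_pct'])  # → 87
```

In practice you would feed this function the stdout of the nvidia-smi command instead of a hard-coded string.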