Find a GPU with enough memory

Problem description

I want to programmatically find out the available GPUs and their current memory usage and use one of the GPUs based on their memory availability. I want to do this in PyTorch.

I found this code in this post:

import torch.cuda as cutorch

for i in range(cutorch.device_count()):
    if cutorch.getMemoryUsage(i) > MEM: 
        opts.gpuID = i
        break

but it is not working in PyTorch 0.3.1 (there is no function called getMemoryUsage). I am interested in a PyTorch-based solution (using library functions). Any help would be appreciated.

Recommended answer

The page you linked already contains an answer:

#!/usr/bin/env python
# encoding: utf-8

import subprocess

def get_gpu_memory_map():
    """Get the current gpu usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    # Ask nvidia-smi for the used memory of each GPU; decode the output as text
    result = subprocess.check_output(
        [
            'nvidia-smi', '--query-gpu=memory.used',
            '--format=csv,nounits,noheader'
        ], encoding='utf-8')
    # Convert lines into a dictionary
    gpu_memory = [int(x) for x in result.strip().split('\n')]
    gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
    return gpu_memory_map

print(get_gpu_memory_map())
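
To actually select a device from this map, as the question asks, one option is to pick the GPU with the least used memory and make it PyTorch's current device. The snippet below is a minimal sketch (not part of the original answer); it assumes that nvidia-smi's device ordering matches PyTorch's CUDA indices, which may require setting CUDA_DEVICE_ORDER=PCI_BUS_ID in the environment.

import torch

def pick_gpu_with_lowest_usage():
    """Sketch: choose the GPU reported by get_gpu_memory_map() (above)
    as having the least used memory and set it as the current device."""
    usage = get_gpu_memory_map()          # {device_id: used memory in MB}
    gpu_id = min(usage, key=usage.get)    # device with the least used memory
    torch.cuda.set_device(gpu_id)
    return gpu_id

gpu_id = pick_gpu_with_lowest_usage()
# Tensors created with .cuda() now land on the selected device.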
