How do I free all memory on GPU in XGBoost?


Question

This is my code:

import xgboost as xgb

clf = xgb.XGBClassifier(
    tree_method='gpu_hist',
    gpu_id=0,
    n_gpus=4,
    random_state=55,
    n_jobs=-1
)
clf.set_params(**params)
clf.fit(X_train, y_train, **fit_params)

I've read the answers on this question and this git issue, but neither worked.

I tried to delete the booster in this way:

import gc

# Free the native booster handle directly, then trigger garbage collection
clf._Booster.__del__()
gc.collect()

It deletes the booster but doesn't completely free up GPU memory.

I guess it's the DMatrix that is still there, but I am not sure.
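One way to see how much device memory is actually still held, sketched here with the NVIDIA pynvml bindings (an assumption on my part; the post doesn't say how memory usage was measured), is to query the driver before and after the cleanup attempt:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # same device as gpu_id=0 above
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f'GPU 0: {info.used / 1024**2:.0f} MiB used of {info.total / 1024**2:.0f} MiB')
pynvml.nvmlShutdown()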

How do I free all of the memory?

Answer

Well, I don't think there is a way you can access the loaded DMatrix, because the fit function doesn't return it. You can check the source code here on this GitHub link:

So I think the best way is to wrap it in a Process and run it that way, like this:

from multiprocessing import Process

import xgboost as xgb

def fitting():
    clf = xgb.XGBClassifier(
        tree_method='gpu_hist',
        gpu_id=0,
        n_gpus=4,
        random_state=55,
        n_jobs=-1
    )
    clf.set_params(**params)
    clf.fit(X_train, y_train, **fit_params)

    # save the model to disk here

fitting_process = Process(target=fitting)
fitting_process.start()
fitting_process.join()  # GPU memory held by the child is released when it exits

# load the model from the disk here
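The two placeholder comments are the crux of this pattern: the child process must persist the fitted model before it exits, and the parent reloads it afterwards. A minimal sketch of those steps, assuming a reasonably recent XGBoost where the sklearn wrapper exposes save_model/load_model (the file name model.json is only an illustration, not from the original answer):

from multiprocessing import Process

import xgboost as xgb

MODEL_PATH = 'model.json'  # illustrative path; any writable location works

def fitting():
    # The booster and its internal DMatrix only ever exist in this child process.
    clf = xgb.XGBClassifier(tree_method='gpu_hist', gpu_id=0, random_state=55)
    clf.fit(X_train, y_train)
    clf.save_model(MODEL_PATH)  # persist the trained model before the process exits

fitting_process = Process(target=fitting)
fitting_process.start()
fitting_process.join()  # the child's exit returns all of its GPU memory to the driver

clf = xgb.XGBClassifier()
clf.load_model(MODEL_PATH)  # reload in the parent; prediction can run on CPU

Because the operating system reclaims every resource a process held when it terminates, this releases the GPU memory unconditionally, regardless of which XGBoost objects still hold references to it internally.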
