No speedup using XGBClassifier with GPU support


Problem description

In the following code, I try to search over different hyper-parameters of xgboost.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_test1 = {
    'max_depth': list(range(3, 10, 2)),
    'min_child_weight': list(range(1, 6, 2))
}
predictors = [x for x in train_data.columns if x not in ['target', 'id']]
gsearch1 = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=100, max_depth=5,
                                                min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                                objective='binary:logistic', n_jobs=4, scale_pos_weight=1, seed=27,
                                                kvargs={'tree_method': 'gpu_hist'}),
                        param_grid=param_test1, scoring='roc_auc', n_jobs=4, iid=False, cv=5, verbose=2)
gsearch1.fit(train_data[predictors], train_data['target'])

Even though I use kvargs={'tree_method':'gpu_hist'}, I get no speedup in the implementation. According to nvidia-smi, the GPU is not much involved in the computation:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:01:00.0      On |                  N/A |
|  0%   39C    P8    10W / 200W |    338MiB /  8112MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0       961    G   /usr/lib/xorg/Xorg                             210MiB |
|    0      1675    G   compiz                                         124MiB |
|    0      2359    G   /usr/lib/firefox/firefox                         2MiB |
+-----------------------------------------------------------------------------+
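
A quick way to inspect what the scikit-learn wrapper actually received is get_params(). In this minimal sketch (mirroring the kvargs call above, behaviour hedged on the wrapper accepting arbitrary keyword arguments), the GPU setting never becomes a real booster parameter:

from xgboost import XGBClassifier

clf = XGBClassifier(kvargs={'tree_method': 'gpu_hist'})
# The whole dict is stored under the literal parameter name 'kvargs';
# 'tree_method' itself is never set, so training silently stays on the CPU.
print(clf.get_params())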

I have installed the GPU supported xgboost using the following commands in Ubuntu:

$ git clone --recursive https://github.com/dmlc/xgboost
$ cd xgboost
$ mkdir build
$ cd build
$ cmake .. -DUSE_CUDA=ON
$ make -j
# then install the Python package, otherwise `import xgboost` may pick up
# an older CPU-only build:
$ cd ../python-package
$ python setup.py install
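
With the build installed, a quick smoke test on random data (sizes here are illustrative) should make nvidia-smi show activity while fit runs; if it doesn't, the imported package is likely not the GPU build:

import numpy as np
import xgboost

print(xgboost.__version__, xgboost.__file__)  # confirm which build Python imports

X = np.random.rand(100000, 50)
y = np.random.randint(2, size=100000)

clf = xgboost.XGBClassifier(n_estimators=50, tree_method='gpu_hist')
clf.fit(X, y)  # watch nvidia-smi in another terminal while this runs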

What could be the reason?

Recommended answer

I know it's a bit late, but still, if the CUDA installation is done correctly, the following code should work:

Without GridSearch:

import xgboost

xgb = xgboost.XGBClassifier(n_estimators=200, tree_method='gpu_hist', predictor='gpu_predictor')
xgb.fit(X_train, y_train)
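
To see the speedup explicitly, one can time a CPU histogram run against the GPU run on the same data. A rough sketch (X_train/y_train as above; the dataset needs to be reasonably large for the GPU to pay off):

import time
import xgboost

for method in ('hist', 'gpu_hist'):
    clf = xgboost.XGBClassifier(n_estimators=200, tree_method=method)
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    print(method, round(time.perf_counter() - start, 2), 'seconds')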

With GridSearch:

from sklearn.model_selection import GridSearchCV
import xgboost

params = {
        'max_depth': [3, 4, 5, 6, 7, 8, 10],
        'learning_rate': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3],
        'n_estimators': [50, 100, 200, 300, 500, 1000],
        # .... whatever ....
}
xgb = xgboost.XGBClassifier(tree_method='gpu_hist', predictor='gpu_predictor')
tuner = GridSearchCV(xgb, param_grid=params)  # note: the keyword is param_grid, not params
tuner.fit(X_train, y_train)

# OR you can pass tree_method and predictor in params also.
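
That last comment can be taken literally: tree_method can be searched over like any other hyper-parameter. A minimal sketch (grid values are illustrative):

from sklearn.model_selection import GridSearchCV
import xgboost

params = {
    'max_depth': [3, 5, 7],
    'tree_method': ['hist', 'gpu_hist'],  # CPU vs GPU compared during the search itself
}
tuner = GridSearchCV(xgboost.XGBClassifier(), param_grid=params, cv=3)
tuner.fit(X_train, y_train)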
