R-MLR: get tuned hyperparameters for a wrapped learner
Question
I'm building an xgboost classification task in R using the mlr package:
# define task
Task <- mlr::makeClassifTask(id = "classif.xgboost",
                             data = df,
                             target = "response",
                             weights = NULL,
                             positive = "yes",
                             check.data = TRUE,
                             blocking = folds)
# make a base learner
lrnBase <- makeLearner(cl = "classif.xgboost",
                       predict.type = "prob",
                       # "response" (= labels) or "prob" (= labels and probabilities)
                       predict.threshold = NULL)
I have to undersample one of my classes :
lrnUnder <- makeUndersampleWrapper(learner = lrnBase, usw.rate = 0.2, usw.cl = "no")
I also have to tune some of the learner's hyperparameters:
paramSet <- makeParamSet(makeNumericParam(id = "eta", lower = 0.005, upper = 0.4),
                         makeIntegerParam(id = "nrounds", lower = 1, upper = 100))
tuneControl <- makeTuneControlRandom(maxit = 100)
resampin <- makeResampleDesc(method = "CV",
                             iters = 4L,
                             predict = "test")
lrnTune <- makeTuneWrapper(learner = lrnUnder,
                           resampling = resampin,
                           measures = fp,
                           par.set = paramSet,
                           control = tuneControl)
My first question is: how can I get the FINAL tuned hyperparameters (and not the tuned hyperparameters corresponding to each iteration of the CV, i.e. not via the extract argument)? In the mlr tutorial I found that I have to train my model as follows:
mdl <- mlr::train(learner = lrnTune, task = Task)
getTuneResult(mdl)
but this does not work without nested resampling. So when I add this block to my code, it works:
resampout.desc <- makeResampleDesc(method = "CV",
                                   iters = length(levels(folds)),
                                   predict = "both",
                                   fixed = TRUE)
resampout <- makeResampleInstance(desc = resampout.desc, task = Task)
resamp <- mlr::resample(learner = lrnTune,
                        task = Task,
                        resampling = resampout, # outer
                        measures = f1,
                        models = FALSE,
                        extract = getTuneResult,
                        keep.pred = TRUE)
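(With nested resampling in place, the per-fold tuning results can be read from the extract slot of the resample result. A minimal sketch, assuming the resamp object from above and mlr's TuneResult fields x and y:

resamp$extract                              # list of TuneResult, one per outer fold
lapply(resamp$extract, function(tr) tr$x)   # tuned hyperparameter values per fold
sapply(resamp$extract, function(tr) tr$y)   # corresponding tuning performance
)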
My second question is: in principle, do I have to wrap my learner if I don't want to do nested resampling (i.e. evaluate the performance of my model)? Or can I simply make a non-wrapped learner and perform my tuning using tuneParams?
Thank you in advance for your help; I got a bit confused about the functionality of wrapped learners and nested resampling.
Answer
You can use tuneParams() to tune a learner and then extract the best hyperparameters as described in the tutorial (https://mlr.mlr-org.com/articles/tutorial/tune.html). You certainly don't have to wrap your learner; the point of wrapping is that you can simply train a model without having to worry about what the hyperparameters are. You should do nested resampling, though, as otherwise your performance estimate may be biased.
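A minimal sketch of that tuneParams() route, reusing the task, learner, parameter set, resampling description, and tune control defined in the question (no tune wrapper needed; setHyperPars is mlr's way to fix the found values on the learner):

res <- tuneParams(learner = lrnUnder,
                  task = Task,
                  resampling = resampin,
                  measures = fp,
                  par.set = paramSet,
                  control = tuneControl)
res$x                                           # the best hyperparameters found
lrnTuned <- setHyperPars(lrnUnder, par.vals = res$x)
mdl <- mlr::train(lrnTuned, Task)               # final model with tuned values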