caret::train: specify further non-tuning parameters for mlpWeightDecay (RSNNS package)


Problem description

I have a problem with specifying the learning rate using the caret package with the method "mlpWeightDecay" from RSNNS package. The tuning parameters of "mlpWeightDecay" are size and decay.

An example leaving size constant at 4 and tuning decay over c(0,0.0001, 0.001, 0.002):

data(iris)
TrainData <- iris[,1:4]
TrainClasses <- iris[,5]

fit1 <- train(TrainData, TrainClasses,
            method = "mlpWeightDecay",
            preProcess = c("center", "scale"),
            tuneGrid=expand.grid(.size = 4, .decay = c(0,0.0001, 0.001, 0.002)),
            trControl = trainControl(method = "cv")
)

But I also want to manipulate the learning rate of the model and not just take the default learning rate of 0.2.

I know that I can use further arguments of the mlpWeightDecay method from RSNNS via the "..." parameter. "learnFuncParams" would be the RSNNS parameter I would need to insert. It takes 4 parameters (learning rate, weight decay, dmin, dmax).
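For reference, this is what those four values look like in a direct RSNNS call, outside caret. This is only an illustrative sketch; it assumes the "BackpropWeightDecay" learn function, and the specific values are arbitrary:

```r
library(RSNNS)

data(iris)
# Normalize the inputs and decode the class labels to a 0/1 target matrix
irisValues  <- normalizeData(iris[, 1:4])
irisTargets <- decodeClassLabels(iris[, 5])

# learnFuncParams = c(learning rate, weight decay, dmin, dmax)
model <- mlp(irisValues, irisTargets,
             size = 4,
             learnFunc = "BackpropWeightDecay",
             learnFuncParams = c(0.2, 0.001, 0, 0),
             maxit = 100)
```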

Going on with the example it looks like this:

fit1 <- train(TrainData, TrainClasses,
    method = "mlpWeightDecay",
    preProcess = c("center", "scale"),
    tuneGrid=expand.grid(.size = 4, .decay = c(0,0.0001, 0.001, 0.002)),
    trControl = trainControl(method = "cv"),
    learnFuncParams=c(0.4,0,0,0)
)

BUT the documentation of the caret train function tells me for the "..." parameter:
arguments passed to the classification or regression routine (such as randomForest). Errors will occur if values for tuning parameters are passed here.

The problem is that one of the 4 "learnFuncParams" parameters (weight decay) IS a tuning parameter.

Consequently I get an error and warnings:

Error in train.default(TrainData, TrainClasses, method = "mlpWeightDecay", : final tuning parameters could not be determined In addition: There were 50 or more warnings (use warnings() to see the first 50)

Warning messages:

1: In method$fit(x = if (!is.data.frame(x)) as.data.frame(x) else x, ... : Over-riding weight decay value in the 'learnFuncParams' argument you passed in. Other values are retained

2: In eval(expr, envir, enclos) : model fit failed for Fold01: size=4, decay=0e+00 Error in mlp.default(x = structure(list(Sepal.Length = c(-0.891390168709482, : formal argument "learnFuncParams" matched by multiple actual arguments

How can I set the learning rate without running into conflicts with the tuning parameter "decay", if both are set in the same parameter "learnFuncParams"?

Thanks!

Answer

It looks like you can specify your own learnFuncParams in "...". caret checks if you've provided your own set of parameters and will only override learnFuncParams[3] (which is the decay). It will take the learnFuncParams[1,2,4] that you have provided.

A very convenient way to find out what caret does is to type getModelInfo("mlpWeightDecay") and then scroll up to the $mlpWeightDecay$fit part. It shows how caret will call the real training function:

$mlpWeightDecay$fit
    if (any(names(theDots) == "learnFuncParams")) {
        prms <- theDots$learnFuncParams
        prms[3] <- param$decay
        warning("Over-riding weight decay value in the 'learnFuncParams' argument you passed in. Other values are retained")
    }
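To reproduce this lookup in an R session (a sketch; it only requires the caret package to be installed):

```r
library(caret)

# Retrieve the model definition caret uses for this method and
# print its fit function, which contains the learnFuncParams handling
mi <- getModelInfo("mlpWeightDecay", regex = FALSE)
mi$mlpWeightDecay$fit
```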

It checks if you've provided your own learnFuncParams. If you did, it uses it, but inserts its own decay. You can ignore the warning.

I think the error you've got ("final tuning parameters could not be determined") has another reason. Have you tried a lower learning rate?
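Putting this together, a call along the following lines should pass the learning rate through while letting caret tune the decay. This is a sketch based on the fit code quoted above; the learning rate of 0.1 is an arbitrary lower value, not a recommendation:

```r
library(caret)

data(iris)
TrainData    <- iris[, 1:4]
TrainClasses <- iris[, 5]

# Slot 3 of learnFuncParams is overridden by caret with each tuned
# decay value; the other slots (including the learning rate in slot 1)
# are kept as given, so the warning can be ignored.
fit1 <- train(TrainData, TrainClasses,
              method = "mlpWeightDecay",
              preProcess = c("center", "scale"),
              tuneGrid = expand.grid(.size = 4,
                                     .decay = c(0, 0.0001, 0.001, 0.002)),
              trControl = trainControl(method = "cv"),
              learnFuncParams = c(0.1, 0, 0, 0))  # lower learning rate
```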
